Wednesday, February 17, 2016

What The Saint Mary's Fade Says About Projecting The Future

Randy Bennett has done an incredible job with Saint Mary's this season.
Saint Mary's had a reasonably strong team last season. They came up short of the NCAA Tournament, but earned a 4 seed in the NIT. Still, with five senior starters departing, they were expected to face a severe rebuilding season in 2015-16, and were picked just 4th in the preseason WCC poll. Yet by late January they were 18-2, and every single bracket in the Bracket Matrix had them as an NCAA Tournament team, with a projected 9 seed.

Unfortunately for Saint Mary's, they've come back to Earth since. After climbing to 16th in the Pomeroy ratings with their January 7th rout of Loyola-Marymount, they are just 43rd now. They have lost 3 of their last 9 games after a 14-1 start, and they have slipped out of the Bracket Matrix.

It is this rise and subsequent decline that tells us something interesting, not just about 2015-16 Saint Mary's, but about future projections in general.

Back when Saint Mary's was at their peak, I suggested two reasons they might regress: their lack of road games, and the fact that they were leading the nation in 3P% and eFG% (see here and here). We can't really examine the first reason quantitatively, but we can examine the second. In mid-January, Saint Mary's led the nation in 3P%. What have they done since? A sharp decline:
Note that the four yellow circles mark their four losses. Also note that the game against non-Division I San Francisco State has been excluded.

What has this decline in outside shooting done to the Saint Mary's offense? A lot. Below I have plotted adjusted offensive and defensive efficiencies by game. Offense is in blue and defense is in red. To define terms: an adjusted offensive efficiency of 1.2 means that they scored 20% more PPP than their opponent allowed during the season (using Pomeroy adjusted efficiencies). An adjusted defensive efficiency of 1.1 means that they allowed 10% fewer PPP than their opponent scored during the season.
In the above chart, I have fit a straight line to the data to look for a trend during the season. Their fitted adjusted defensive efficiency has declined from 1.15 to 1.07 over the course of the season, a 7% decrease. In contrast, their adjusted offensive efficiency has declined from 1.31 to 1.00, a 24% decrease.
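The per-game adjusted efficiency and the straight-line fit described above can be sketched in a few lines of Python. The game-by-game numbers below are hypothetical placeholders for illustration, not the actual Saint Mary's data:

```python
# A sketch of the per-game "adjusted offensive efficiency" defined above,
# with a simple least-squares trend line. The game-by-game numbers are
# hypothetical placeholders, not the actual Saint Mary's data.

# (team PPP scored, opponent's season-average PPP allowed) per game
games = [(1.25, 0.98), (1.20, 1.00), (1.15, 1.02),
         (1.08, 1.00), (1.05, 1.03), (0.98, 1.01)]

# 1.20 here means the team scored 20% more PPP than that opponent
# typically allows, matching the definition in the text.
adj_off = [scored / allowed for scored, allowed in games]

# Least-squares slope of efficiency vs. game number exposes the trend.
n = len(adj_off)
xs = list(range(n))
x_bar = sum(xs) / n
y_bar = sum(adj_off) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, adj_off))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar
print(f"fitted efficiency, first game: {intercept:.3f}, "
      f"last game: {intercept + slope * (n - 1):.3f}")
```

Comparing the fitted value at the first game against the fitted value at the last game is exactly the "declined from X to Y over the course of the season" reading used in the text.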

How bad has the shooting slump been that caused this decline in offense, and how concentrated has it been in three-point shooting? Well, we can break the season in half: the 14 Division I games prior to them hitting their season-high Pomeroy rating (all games up to and including the Loyola-Marymount game on January 7th) and the 9 games since.

During the 14 games of the Saint Mary's rise, they hit 47.1% of their threes, 58.8% of their twos, and 69.7% at the line. During the 9 games of their decline, they've hit 35.6% of their threes, 52.3% of their twos, and 69.1% of their free throws. That comes out to a 1% relative decline in FT shooting, an 11% decline in 2P%, and a 24% decline in 3P%, right in line with their 24% decline in offensive efficiency.
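Those relative declines fall straight out of the split-half numbers quoted above:

```python
# Relative declines between the two halves of the season, using the
# split-half shooting percentages quoted in the text.
rise = {"3P%": 0.471, "2P%": 0.588, "FT%": 0.697}  # first 14 D-I games
fall = {"3P%": 0.356, "2P%": 0.523, "FT%": 0.691}  # 9 games since

for stat in rise:
    decline = (rise[stat] - fall[stat]) / rise[stat]
    print(f"{stat}: {decline:.0%} relative decline")
# 3P%: 24%, 2P%: 11%, FT%: 1%
```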

-----------------------------------------------

The data is clear: Saint Mary's shot atypically well in the first half of the season, and that shooting has regressed to the mean ever since.

What does this say about projections? It says that "luck" and randomness do not just apply to stats largely outside a team's control, such as winning percentage in close games, FT% defense, and 3P% defense. Even when something is a clear skill, and three-point shooting clearly is a skill, there is still random variance in performance.

In other words, when Saint Mary's was shooting 47.1% on their threes 14 games into the season, it did not mean that they were a true 47% three-point shooting team. They were probably something like a 37% to 39% true three-point shooting team that had some good shooting luck over a 14-game stretch.
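To put a rough number on how much noise a 14-game sample still carries, here is the binomial standard error for a true 39% shooter. The roughly 20 attempts per game is an assumption for illustration, not the actual Saint Mary's attempt count, and the calculation treats attempts as independent:

```python
import math

# Sampling noise for a hypothetical true 39% three-point shooting team
# over a 14-game stretch. The ~20 attempts per game is an assumption,
# not the actual Saint Mary's total; attempts are treated as independent.
p_true = 0.39
attempts = 14 * 20  # 280 attempts

se = math.sqrt(p_true * (1 - p_true) / attempts)
print(f"standard error over {attempts} attempts: {se:.1%}")
# about 2.9% of shooting percentage, so sizable swings in observed
# 3P% are possible on pure variance alone
```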

Statistically, fans generally misuse P-values in this context. For example, let's say that I argued that Saint Mary's was a true 39% three-point shooting team. The Gaels shot 40%+ in each of their first ten games. A fan might argue (and I have gotten this argument from fans of teams before) that the probability of a team outdoing its true three-point percentage ten games in a row is (0.5)^10, or about 1-in-1024. Seems highly unlikely! But that's an improper use of P-values. Statistically, one would expect some of the 351 Division I teams to shoot above their average in 10 straight games at some point during the season. If you flip a quarter enough times, it is inevitable that you will eventually get 10 heads in a row, and if you have enough teams, then eventually one will shoot above average in ten straight games.
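The coin-flip argument above can be checked with a quick Monte Carlo sketch. The 30-game season length and the 50/50 above-average-per-game model are simplifying assumptions:

```python
import random

random.seed(1)

# Monte Carlo sketch of the multiple-comparisons point: treat each game
# as a 50/50 shot at shooting above the team's true percentage, and ask
# how often at least one of 351 teams strings together 10 straight
# above-average games. A 30-game season is an assumption.
def has_run(n_games=30, run_len=10):
    streak = 0
    for _ in range(n_games):
        if random.random() < 0.5:
            streak += 1
            if streak >= run_len:
                return True
        else:
            streak = 0
    return False

n_seasons = 500
hits = sum(any(has_run() for _ in range(351)) for _ in range(n_seasons))
print(f"seasons with at least one 10-game streak: {hits / n_seasons:.0%}")
```

Even though any single team's streak is roughly a 1% event, spreading that chance across 351 teams makes the simulated league produce at least one such streak in the large majority of seasons, which is exactly why the 1-in-1024 figure is the wrong number to be impressed by.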

How does one quantify this effect? It's hard. How could one have said on January 7th what the true Saint Mary's 3P% should have been? You really can't. But from a qualitative perspective, we can argue that a team leading the nation in 3P%, shooting well above the levels of recent seasons, is likely to regress over the rest of the season. In the end, it's hard to ever be much more precise than that.
