Wednesday, June 10, 2015

Small Sample Size Theater: NBA Finals Edition


The above video is one of my favorites on YouTube. It's by Ted Berg, a baseball writer. The premise of the video is simple: weird things happen in small sample sizes.

When the sports media debates analytics, they like to frame it as stats vs. scouting (this is how Boston Globe morons like to frame it, for example). But this is nonsense. Statistics and scouting must go hand in hand. And what matters is significance.

What this means is that over very small sample sizes, statistics don't really tell us anything. If a guy goes 1-for-2 on three-pointers in a game, this tells us basically nothing about whether he is a good shooter or not. In that case, I'd like to see what a scout says about the way he shoots. But as the sample size increases, data becomes very valuable. If a guy attempts 200 three-pointers in a season and hits them at a 42% clip, this is fairly good evidence that he is a good shooter. It's possible he's just a 34% shooter who got lucky over a 200 shot sample, but it's unlikely. At that point, does the data correlate with the scouting? If not, that warrants further investigation.
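If you want to check that intuition, the exact binomial tail is easy to compute. Here's a quick Python sketch (the 34% "true talent" figure is just the hypothetical from above):

```python
from math import comb

def binom_tail(n, k, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A hypothetical true-talent 34% shooter needs 84+ makes in 200 attempts
# to post a 42% clip (0.42 * 200 = 84).
p_lucky = binom_tail(200, 84, 0.34)
print(f"P(34% shooter posts 42%+ over 200 attempts) = {p_lucky:.3f}")
```

That comes out to around 1%, so "possible but unlikely" is about right.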

A lot of work has been done on data significance in baseball. For example, this classic from almost eight years ago attempts to quantify how many at-bats are required for a hitter's or pitcher's stats to become statistically meaningful.

The driving story of the NBA Finals so far, which the heavily favored Golden State Warriors are losing two games to one, is that the Warriors have shot poorly. After the first two games, where Steph Curry shot poorly, the popular narrative was that Curry's big fall late in the Western Conference Finals was severely hampering him:
I made a not particularly bold prediction:


Steph Curry then shot precisely 7/13 on threes in Game 3. Have no fear, though, because we have a new narrative! Now it's not that Curry is hurt, but that the Cavs defense is incredible, and that specifically Matthew Dellavedova is shutting everybody down:
Now you should immediately be skeptical of this new narrative for two reasons. First, it's piggybacking on data that had previously been used to justify a different narrative. Second, it plays really nicely into a very popular media meme, the gritty white guy who hustles and makes winning plays. In fact, the narrative even got boiled down into sub-game segments:
To be clear, I'm not trying to pick on Jason McIntyre here. Jason is a nice guy, and his takes aren't any different from pretty much anybody else in the mainstream media. I'm just using him as an example.

Anyway, certainly it's true that scouting suggests that the Cavs played good defense in Game 3. And Dellavedova looked like he did a great job defensively as well. But do the numbers back that up?

Well, as soon as you start looking at the numbers, you realize how comically small the samples in question are. Warriors players whom Dellavedova was defending in Game 3 attempted 16 shots. They went 3/8 on threes and 1/8 on mid-range jumpers, with no attempts near the hoop. That's a little bit of evidence of good defense, but it's also clearly small-sample data that will regress. For perspective: every NBA team allowed opponents to hit mid-range jumpers at a rate somewhere between 37% and 41% this past season. Among individual players, the lowest FG% allowed on mid-range jumpers for players defending at least 300 mid-range shots was 32%, shared by Anthony Davis and Draymond Green; Marc Gasol was third best at 33%. So is Dellavedova's 13% mid-range jumper defense in one game meaningful? No.
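To put that 1-for-8 in perspective, here's a quick sketch assuming the defense is merely league average, say a ~39% opponent mid-range FG% (the middle of the 37% to 41% range above). Even then, a 1-for-8 stretch isn't rare:

```python
from math import comb

def binom_cdf(n, k, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Assume a league-average defense: opponents hit ~39% of mid-range jumpers.
# How often does a 39% offense still go 1-for-8 or worse?
p_cold = binom_cdf(8, 1, 0.39)
print(f"P(1-for-8 or worse vs. average defense) = {p_cold:.3f}")
```

That happens better than one game in ten with zero defensive credit, which is why eight attempts can't separate "elite defense" from noise.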

Well okay, what about the Cavs defense in general? We can separate out shots by how close the defender is to the shooter. All data on "Defensive Contest Distance," as it's called, come from ShotAnalytics. During the regular season, the Warriors posted a 48 eFG% on shots taken with a defender within two feet of the shooter, 53 eFG% at three or four feet, and 58 eFG% at five or more feet. In other words, defending a guy matters. Not as much as you might think, but it matters.

In Game 3? When the defender was within two feet of the shooter, the Warriors were 1/2 on threes and 6/20 on twos, good for a 34 eFG%. What about when they were wide open, when the defender was five or more feet away? The Warriors were 5/21 (24%) on threes and 5/12 on twos, good for a 38 eFG%. We can visualize the data on wide open (5+ feet) shots for Game 3 vs the regular season below:
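For anyone who wants to check the arithmetic, eFG% is just (FGM + 0.5 × 3PM) / FGA, crediting made threes at 1.5x. The Game 3 figures above fall right out:

```python
def efg(fgm, fga, threes_made):
    """Effective FG%: (FGM + 0.5 * 3PM) / FGA, crediting threes at 1.5x."""
    return (fgm + 0.5 * threes_made) / fga

# Tightly contested (defender within two feet): 1/2 on threes, 6/20 on twos.
contested = efg(fgm=1 + 6, fga=2 + 20, threes_made=1)
# Wide open (defender five or more feet away): 5/21 on threes, 5/12 on twos.
wide_open = efg(fgm=5 + 5, fga=21 + 12, threes_made=5)
print(f"contested eFG% = {contested:.0%}, wide-open eFG% = {wide_open:.0%}")
```

Those reproduce the 34 and 38 eFG% figures quoted above.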

So what does the data say? The data is consistent with the Cavs defense playing well. On shots close to the basket and in the mid-range, while being defended closely, the Warriors shot worse than they normally did in those same situations during the regular season. It's not proof that the Cavs defense played great, but the fact that it's consistent with what we all thought we subjectively saw suggests that's probably what happened.

However, the data says that the biggest reason the Warriors struggled to score was not the Cavs defense but their own wide-open jump shooting. The Warriors shot 24% on threes when the defender was five or more feet away, versus 54% when the defender was four or fewer feet away. They shot far better on defended threes than on wide-open ones. Do the Warriors actually prefer to be defended closely when they shoot? Of course not. But weird things happen in samples as small as 21 shots. The Warriors happened to brick a bunch of open jumpers in Game 3. Will they do it in Game 4? Maybe, maybe not.
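How weird is 5-of-21, really? As a rough illustration (assuming, hypothetically, that the Warriors are a true 40% team on wide-open threes), a cold night like that is well within normal variance:

```python
from math import comb

def binom_cdf(n, k, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical: the Warriors are a true 40% team on wide-open threes.
# How often does such a team go 5-of-21 or worse anyway?
p_brick = binom_cdf(21, 5, 0.40)
print(f"P(5-of-21 or worse for a true 40% team) = {p_brick:.2f}")
```

Roughly one game in ten. Nothing about one cold shooting night demands a defensive explanation.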

When you see how quickly the narratives fall apart while looking at complete games by complete teams, you realize just how tenuous single-player, single-game narratives are. "Was a guy clutch last night?", for example. In that case we're generally looking at a sample size of a single shot, which is ludicrous.

Generally speaking, when debating things like clutch play, the stats are just dropped altogether. It seems like the majority of ESPN's programming over the past week has centered on whether LeBron is more clutch than Michael Jordan was in the playoffs. Quick: tell me what Michael Jordan's FG% was in the final minute of close playoff games in his career. Or what his FG% was in playoff games in general. I'm sure you don't know. I don't know either. Neither does anybody else debating how clutch he was.

So remember when you watch Game 4 that the winner will very likely come down to which team happens to have the better night hitting wide-open jump shots. And due to randomness, we have no way to know in advance which team that will be.

I often hear the complaint that analytics makes sports boring. I counter that the previous paragraph is precisely why analytics makes sports more exciting. If you really honestly believe that Matthew Dellavedova cuts the Golden State 3P% in half and that he and LeBron are willing the team to victories with their #clutchiness, then why bother watching Game 4? You already know that the Cavs will win. In contrast, I have no idea who will win Game 4. I accept and embrace the randomness of small sample size sporting events. You should, too.
