Tuesday, October 14, 2008

Ranking Conferences

I've talked in the past about how to rank (and how not to rank) teams, so it's worthwhile to discuss ranking conferences:

Why do we care how conferences are ranked?
We hear about conference rankings all the time during the college football season. We hear less of it during college basketball season. This is ironic, since conference rankings are actually much more important in college basketball than college football. The college football season is much shorter: Most teams don't play every team in their own conference, and none ever get a home-and-home series. In college basketball, on the other hand, every team plays every other team in their conference. They also play most (if not all) of those teams in a home-and-home.

Since there is much more randomness in the college football schedule, it's easy for a team to finish behind another team that it's actually better than. And nobody would make the argument "Team A is better than Team B because they both finished third in their conference, and Team A's conference is better" with a straight face. But in college basketball, that argument is at least somewhat plausible. And you can also say pretty confidently that the team finishing 3rd in a conference is better than the team that finished 6th.

Yet try watching a full football game between BCS conference teams without hearing the announcers talk about which conferences are better. You will rarely hear this during a college basketball game. Still, enough people care about it that it's worth discussing.

Why are most conference rankings wrong?

The Absurd.
One only has to read some blogs or big message boards (like ESPN or CBS Sportsline) to find some truly idiotic conference rankings. One of my favorites: you'll hear all the time that because Ohio State lost badly in the last two National Title games to SEC teams, the Big Ten must be vastly inferior to the SEC. First of all, we know that the mere fact that LSU beat Ohio State in a game last season doesn't even make last year's LSU team better than last year's Ohio State team. It certainly doesn't make this year's LSU team better than this year's Ohio State team. And even if it did, it would tell us nothing about the other 21 teams that play in the Big Ten or SEC.

The Slightly Less Absurd.
Interestingly enough, the arguments that you hear broadcast announcers and professional TV/radio analysts making aren't much less absurd than the message board example. After week 4 of this college football season, for example, I heard a number of announcers and analysts declare the SEC the best conference in the land because it had three teams in the AP top five. This is absurd on its face because early season rankings have almost nothing to do with results. None of the teams in the top five (other than USC) had played a good team yet. So the fact that those teams were in the top five reflected preseason expectations and bias, rather than any statement of fact about their abilities.

And even if one conference ends a season with three teams in the top five, this doesn't make any kind of strong statement about the strength of a conference anyway. The Mountain West has three teams in the USA Today Top 25, while the Pac-10 has two. Does that make the Mountain West better? Ah, but a Pac-10 fan will counter: The Pac-10 has one team in the Top 8 while the Mountain West has none. Ah, but the Mountain West fan will counter: The Mountain West has two teams in the Top 20 while the Pac-10 only has one. Ah, but the Pac-10 fan will counter: The Pac-10 has six teams in the Sagarin Top 50, while the Mountain West only has four. See how stupid this argument is? Let me expand:

Don't pick an arbitrary ranking level and count the number of teams within it
I know that's kind of a long title for a post section, but it's the point of my Mountain West/Pac-10 example. Unless two conferences are wildly far apart in ability level (such as the SEC and the Sun Belt), one can always come up with a ranking level that has more teams from one conference than the other. Yet for some reason, this is the most commonly used argument for why one conference is better than another. People think that "most teams in the Top 25" is a plausible argument since people are always seeing Top 25 rankings, but 25 is really an arbitrary number. Why not the Top 24 or 26? Why not the Top 15 or Top 40? Any argument built on an arbitrary cutoff like this is bound to fail.

Which team is better: Kansas State or Mississippi State?
During this college football season, you've probably heard countless pundits debate whether the SEC or Big 12 is better, and you always hear them debating Florida, LSU, Alabama & Georgia vs. Texas, Oklahoma, Texas Tech & Missouri. But I'll bet that you haven't heard anybody debating whether Kansas State is better than Mississippi State. But why not? Why choose to compare only the top four teams in each conference? According to Sagarin, Kansas State and Mississippi State are the 9th best team (out of 12) in the Big 12 and SEC, respectively. So shouldn't we be debating teams all the way down a conference?

So the real question is: How do we weight teams?
Since it's wrong to arbitrarily count teams within the Top "n" spots of the AP poll, and it's also wrong to only focus on the top two or three teams in a conference, it's clear that we need to look at all of the teams in a conference. But certainly we need to weight these teams, because I think we would all agree that the difference between the top team being 2nd or 5th in the country is a lot more important than whether the last place team is 100th or 110th. If we again look at Sagarin (I'm continuing to use the same computer polling simply for consistency, not because it's the only choice in computer polling) we see that he ranks conferences in two separate ways. One is to just average all of the teams in a conference, and the other is to take a "central mean", which discounts the rankings at the very top and bottom of a conference. This keeps one really good team from singlehandedly pulling up the ranking of a conference, and also keeps one really bad team from dragging it down.
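The difference between those two averages can be sketched in a few lines. The ratings below are made up (higher = better), and the "central mean" here is a simple trimmed mean chosen to be roughly in the spirit of Sagarin's approach; his actual formula may differ.

```python
# Toy conference ratings, higher = better. Conference A has one
# superstar team pulling up an otherwise weak league; Conference B
# is solid through the middle. Numbers are invented for illustration.
conf_a = [100, 60, 58, 56, 54, 52, 50, 48, 46, 44]
conf_b = [68, 64, 62, 60, 58, 56, 54, 52, 50, 30]

def simple_mean(ratings):
    """Plain average of every team in the conference."""
    return sum(ratings) / len(ratings)

def central_mean(ratings, trim=2):
    """Drop the `trim` best and `trim` worst teams before averaging,
    so one extreme team can't drag the conference rating around."""
    middle = sorted(ratings, reverse=True)[trim:len(ratings) - trim]
    return sum(middle) / len(middle)

print(simple_mean(conf_a), simple_mean(conf_b))    # A looks better
print(central_mean(conf_a), central_mean(conf_b))  # B looks better
```

With these numbers the two methods disagree: the simple mean rewards Conference A for its one great team, while the central mean rewards Conference B for its stronger middle.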

So to repeat the question of this section: how should we weight the teams in a conference? I would argue that there is more than one answer to this, and it depends on why we are ranking the conferences. If we want to compare a team at the top of one conference against a team at the top of another conference to determine whether a 14-2 record is better in one conference than another, then we should probably be discounting the teams at the top of a conference. For example, a highly ranked Gonzaga will drag up the average rating of the WCC, but we can't then use that highly rated WCC as an argument for why Gonzaga is good - it's circular reasoning. We can only judge Gonzaga's conference record by rating the WCC with Gonzaga's own rating discounted.
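One simple way to break that circularity is a leave-one-out average: when judging a team's conference record, rate the conference using only the *other* teams. A minimal sketch, with invented ratings for the WCC (the team names are real, the numbers are not):

```python
# Hypothetical power ratings (higher = better) for a WCC-like league
# where one team towers over the rest. All numbers are made up.
wcc = {
    "Gonzaga": 90, "Saint Mary's": 70, "San Diego": 62,
    "Portland": 55, "Santa Clara": 52, "Pepperdine": 48,
    "San Francisco": 45, "Loyola Marymount": 40,
}

def conference_strength_for(team, ratings):
    """Rate the conference as the average of every *other* team, so a
    team's own rating can't circularly inflate the strength of the
    schedule we're judging it against."""
    others = [r for name, r in ratings.items() if name != team]
    return sum(others) / len(others)

naive = sum(wcc.values()) / len(wcc)
honest = conference_strength_for("Gonzaga", wcc)
print(naive, honest)  # the leave-one-out rating is lower
```

With these numbers, the conference looks noticeably weaker once Gonzaga's own rating is excluded, which is the point: Gonzaga shouldn't get credit for playing in a conference that it alone makes look good.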

While one would think it's silly to judge conferences by whether their worst teams are bad or really bad, there is something to caring about those teams. Take the ACC and Big East conferences during the last college basketball season. The fact that the Big East had some near-automatic wins on the bottom and the ACC didn't has to be taken into account. You see this in the Sagarin ratings again: The ACC has a higher "simple average" than the Big East, but the Big East has the higher "weighted average." This tells us what we already knew: The Big East was stronger in the middle of the conference (hard to argue for Clemson, Miami, Maryland and Virginia Tech over West Virginia, Pitt, Marquette and Notre Dame), but the ACC was stronger at the very top and very bottom (UNC & Duke over Louisville & Georgetown... as well as Virginia & Boston College over South Florida & Rutgers). The two weightings show us two sides of the same information, and I don't think any of us can say definitively which is the "correct" ranking.

The clear answer, therefore, is that we have to take all teams into account with some sort of weighting. But different weightings are all valid, and there is no clear "best" possible weighting. So since there are different possible ways to weight conferences, it's also impossible to perfectly rank conferences. Ergo, all weighted conference rankings are decent, but imperfect. But a conference ranking system doesn't need to be perfect in order to be far better than anything you're hearing from TV/radio sports analysts right now.
