Miller looked at the past 75 years of college basketball tournament results and ranked the success of all 50 US states. Some of the rankings made sense, such as a top ranking for traditional basketball power Kansas. Others made less sense, such as a low ranking for Florida despite its teams' recent tournament success. So what is really driving these rankings?
To help understand this, we built our own study of historical tournament data available (and used with permission) from Sports Reference. We followed a similar method to the one Miller used, where the majority of the score comes from the tournament performance of basketball schools within each state. Making the tournament was worth one point, getting to the semifinals was worth four points, and then winning the title game was worth 10 points. Finally, we divided this total score by the number of eligible (Division I) teams in each state to provide a weighted score. The logic behind the weight is that some states have more eligible teams that could make the tournament than do other states. While Miller added additional multipliers and even a current component to his final score, our simpler formula returned a similar result that we can now visually explore.
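The scoring described above is simple enough to sketch in a few lines of code. This is an illustrative sketch only: the point values (1 for an appearance, 4 for a semifinal, 10 for a title) come from the text, but the sample numbers are hypothetical, not real tournament data.

```python
def weighted_score(appearances, semifinals, titles, eligible_teams):
    """Total tournament points for a state, divided by its Division I team count."""
    total = appearances * 1 + semifinals * 4 + titles * 10
    return total / eligible_teams

# Hypothetical example: a state whose schools made the tournament 60 times,
# reached 10 semifinals, won 5 titles, and has 8 eligible Division I programs.
print(weighted_score(60, 10, 5, 8))  # (60 + 40 + 50) / 8 = 18.75
```

The denominator is the part we will question below: two states with identical tournament results get very different weighted scores if one has many more eligible programs.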
Rather than showing these results in just a list, we can use JMP 11 graphing, labeling and mapping to analyze them quickly and visually. We first color-coded the map by each state's weighted score and labeled each state with its rank. We also included a second view that zooms in for a clearer picture of the Northeast states.
For fun, we focused on four state comparisons that would interest many of us on the JMP team.
1) North Carolina vs. Kentucky – How can Kentucky at No. 1 (dominated by multiple-title winner UK, plus a recent title at UL) be ranked ahead of North Carolina at No. 6 (home to multiple title winners UNC, NCSU and Duke)?
2) California vs. Nevada – How can Nevada at No. 5 (with only a brief title stretch at UNLV) be ranked ahead of California at No. 11 (with record-setting titles at UCLA)?
3) New York vs. Michigan – How can Michigan at No. 10 (with strong programs at University of Michigan and MSU) be ranked so far ahead of New York at No. 27 (with strong programs at Syracuse and St. John's)?
4) Texas vs. Oklahoma – How can Oklahoma at No. 3 (with occasional basketball tourney appearances by OU and OSU) be ranked ahead of Texas at No. 26 (with frequent tournament participation by UT, Texas A&M, Texas Tech, UTEP, Baylor and Houston)?
One possibility is that weighting the score by dividing the total score by the number of eligible teams in each state has a huge impact on the final ranking. Let’s see if a new feature in JMP 11 – geospatial mapping – can help us see this potential effect.
Immediately, we can see the huge impact that having more eligible teams (shown by the bigger circles) exerts on a state's weighted score. In our four state rivalries, the higher-ranked state at times gained a sizable edge simply by having fewer eligible programs.
A JMP 11 scatterplot helps show the negative slope of the lines between our state comparisons. If we draw axis lines at 12.5 eligible teams and a weighted score of 30.00 to create quadrants, we can see that all of our states with a large number of eligible schools fall into the bottom-right quadrant of the chart.
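The quadrant split can be expressed as a simple rule. A minimal sketch, using the cutoffs from the text (12.5 eligible teams, 30.00 weighted score); the sample values passed in are hypothetical:

```python
def quadrant(eligible_teams, weighted_score, x_cut=12.5, y_cut=30.0):
    """Classify a state into one of the four scatterplot quadrants."""
    horiz = "right" if eligible_teams > x_cut else "left"
    vert = "top" if weighted_score > y_cut else "bottom"
    return f"{vert}-{horiz}"

# Many eligible programs, low weighted score -> the crowded bottom-right quadrant.
print(quadrant(20, 12.0))  # bottom-right
# Few eligible programs, high weighted score -> top-left.
print(quadrant(5, 45.0))   # top-left
```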
So while the weighting was meant to account for the fact that some states have more opportunities (eligible programs) to place and win in the tourney than others, putting the number of eligible programs in the denominator of the formula had too great an impact on the overall score and corresponding ranking of the state.
You could argue that the average college basketball fan would rate his or her state's biggest basketball programs in the power conferences (like North Carolina's UNC, NCSU, Duke and Wake Forest in the ACC) as more influential on state basketball supremacy than the weaker teams in mid-major or smaller conferences (like Western Carolina, Elon and Appalachian State in the Southern Conference). So perhaps tweaking the formula to use only the raw total score (without any weighting) would be a fairer way to score basketball power. Even if your state has many eligible basketball teams, only a few big programs in the stronger conferences really stand a good chance of earning the higher point values for semifinal and title wins.
The map based on the total score rankings (unweighted) gives a very different view of where the top state basketball powers are. Now our previously down-weighted states of California, Texas, New York and North Carolina all finished much higher in the ranking and actually above their comparison states. Looking again at a scatterplot of our comparison states, we can see the magnitude of the differences as these states have moved up to or near the top-right quadrant and reversed the slope against their comparison state. While this may be a very basic way to calculate the rankings (without any weighting or adjusting), it provides a useful view that seems more in line with conventional knowledge and better represents top team performances within the states.
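The rank reversal above is easy to demonstrate in miniature. In this sketch the two states and their numbers are entirely made up, chosen only to mimic the pattern we saw: a state with many eligible programs piles up a large raw total but gets dragged down by the denominator, while a small state with one strong program wins the weighted comparison.

```python
# Hypothetical data, for demonstration only.
states = {
    "State A": {"total": 200, "eligible": 20},  # many programs, big raw total
    "State B": {"total": 60,  "eligible": 4},   # few programs, modest total
}

for s in states.values():
    s["weighted"] = s["total"] / s["eligible"]

# Weighted ranking: State B (15.0) outranks State A (10.0).
by_weighted = sorted(states, key=lambda n: states[n]["weighted"], reverse=True)
# Unweighted ranking: State A (200) outranks State B (60) -- the order reverses.
by_total = sorted(states, key=lambda n: states[n]["total"], reverse=True)

print(by_weighted)  # ['State B', 'State A']
print(by_total)     # ['State A', 'State B']
```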
So the debate about the best way to measure the basketball power of a state will continue. However, we can see that it is important to really understand how rankings are constructed and to explore – visually, if possible – whether they are calculated fairly. So enjoy the basketball games, and may your state's teams go far in the tourney this year!