Journal of Quantitative Analysis in Sports
An official journal of the American Statistical Association
Editor-in-Chief: Mark Glickman, PhD
SCImago Journal Rank (SJR) 2014: 0.265
Source Normalized Impact per Paper (SNIP) 2014: 0.513
Impact per Publication (IPP) 2014: 0.452
Ranking rankings: an empirical comparison of the predictive power of sports ranking methods
¹Pitzer College, Department of Mathematics, 1050 North Mills Avenue, Claremont, CA 91711, USA
²UCLA, Department of Mathematics, 405 Hilgard Avenue, Los Angeles, CA 90095, USA
Citation Information: Journal of Quantitative Analysis in Sports. Volume 9, Issue 2, Pages 187–202, ISSN (Online) 1559-0410, ISSN (Print) 2194-6388, DOI: 10.1515/jqas-2013-0013, May 2013
Abstract: In this paper, we empirically evaluate the predictive power of eight sports ranking methods. For each ranking method, we implement two versions, one using only win-loss data and one utilizing score-differential data. The methods are compared on four datasets: 32 National Basketball Association (NBA) seasons, 112 Major League Baseball (MLB) seasons, 22 NCAA Division 1-A Basketball (NCAAB) seasons, and 56 NCAA Division 1-A Football (NCAAF) seasons. For each season of each dataset, we apply 20-fold cross-validation to determine the predictive accuracy of the ranking methods. The non-parametric Friedman hypothesis test is used to assess whether the predictive errors of the considered rankings over the seasons are statistically dissimilar. The post-hoc Nemenyi test is then employed to determine which ranking methods have significant differences in predictive power. For all datasets, the null hypothesis – that all ranking methods are equivalent – is rejected at the 99% confidence level. For the NCAAF and NCAAB datasets, the Nemenyi test concludes that the implementations utilizing score-differential data are usually more predictive than those using only win-loss data. For the NCAAF dataset, the least squares and random walker methods have significantly better predictive accuracy at the 95% confidence level than the other methods considered.
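The comparison pipeline described in the abstract can be illustrated compactly. Below is a minimal sketch (not the authors' code) of how per-season, cross-validated error rates for several ranking methods could be compared with the Friedman test followed by a Nemenyi post-hoc analysis in the critical-difference form of Demšar (2006); the function name, the toy data, and the choice of four methods over 30 seasons are hypothetical and for illustration only.

```python
# Sketch: Friedman test + Nemenyi critical difference over per-season errors.
# errors[i, j] = cross-validated prediction error of ranking method j in season i.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_nemenyi(errors):
    """errors: (n_seasons, n_methods) array of cross-validated error rates."""
    n_seasons, n_methods = errors.shape

    # Friedman test: are the methods' error distributions distinguishable?
    stat, p_value = friedmanchisquare(*[errors[:, j] for j in range(n_methods)])

    # Rank methods within each season (rank 1 = lowest error), then average.
    ranks = np.apply_along_axis(rankdata, 1, errors)
    avg_ranks = ranks.mean(axis=0)

    # Nemenyi critical difference: CD = q_alpha * sqrt(k(k+1) / (6N)).
    # q_alpha values (alpha = 0.05) for k = 2..10, from Demsar (2006), Table 5.
    q_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850,
            7: 2.949, 8: 3.031, 9: 3.102, 10: 3.164}
    cd = q_05[n_methods] * np.sqrt(n_methods * (n_methods + 1) / (6.0 * n_seasons))
    return stat, p_value, avg_ranks, cd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 30 seasons x 4 methods, with method 0 slightly better.
    errs = rng.normal(loc=[0.28, 0.31, 0.32, 0.33], scale=0.02, size=(30, 4))
    stat, p, avg_ranks, cd = friedman_nemenyi(errs)
    print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
    print("Average ranks:", np.round(avg_ranks, 2))
    print(f"Nemenyi critical difference (alpha = 0.05): {cd:.2f}")
    # Two methods differ significantly if |avg_rank_i - avg_rank_j| > cd.
```

Hard-coding the small table of Nemenyi critical values keeps the sketch self-contained; a post-hoc statistics library could be substituted for the same purpose.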