Outside of the BCS, few analytical tools in college sports have generated as much intense passion, as many proponents and jeremiads, or as much controversy as the RPI formula.
Taken at face value, it shouldn't be so despised, right? At first blush, the RPI seems a laudable, if simplified, thumbnail of a team's performance against its schedule. Per the ol' Wiki:
[The] index comprises a team’s winning percentage (25%), its opponents’ winning percentage (50%), and the winning percentage of those opponents’ opponents (25%). The opponents’ winning percentage and the winning percentage of those opponents’ opponents both comprise the strength of schedule (SOS). Thus, the SOS accounts for 75% of the RPI calculation and is 2/3 its opponents’ winning percentage and 1/3 its opponents’ opponents’ winning percentages.
Strength of schedule is undoubtedly a factor we'd like to know when trying to evaluate meritorious teams for post-season play. But, as you can see from the math, the results skew away from what a team does on the court and lean much more heavily on SOS. We are left with a bizarre formula where an opponent's record against third parties your team never even played counts as much as whether or not your team won its own games. Who you play matters, but 75% of the formula?
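To see how lopsided that weighting is, here is a minimal sketch of the basic RPI calculation in Python. The winning percentages are made up for illustration, and refinements like the home/road weighting of wins are ignored:

```python
def rpi(wp, owp, oowp):
    """Basic RPI weighting: 25% team winning pct, 50% opponents'
    winning pct, 25% opponents' opponents' winning pct."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical example: a team that wins 80% of its games against a
# weak schedule...
strong_team_weak_sched = rpi(wp=0.80, owp=0.45, oowp=0.50)
# ...versus a team that wins only 55% against a brutal schedule.
weak_team_strong_sched = rpi(wp=0.55, owp=0.65, oowp=0.60)

print(round(strong_team_weak_sched, 4))  # 0.5500
print(round(weak_team_strong_sched, 4))  # 0.6125
```

Under this weighting, the second team rates comfortably higher despite winning far fewer of its own games, which is exactly the skew the critics point to.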
This, at its heart, is the fundamental failing of the RPI and one of its longest-standing criticisms.
Today, the NCAA drove a stake into the heart of the Spreadsheet Nosferatu that has stalked college athletics since 1981. It shall be replaced with the “NET”:
The NCAA Evaluation Tool, which will be known as the NET, relies on game results, strength of schedule, game location, scoring margin, net offensive and defensive efficiency, and the quality of wins and losses. To make sense of team performance data, late-season games (including from the NCAA tournament) were used as test sets to develop a ranking model leveraging machine learning techniques. The model, which used team performance data to predict the outcome of games in test sets, was optimized until it was as accurate as possible. The resulting model is the one that will be used as the NET going forward.
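The NCAA has not published the model itself, but the description (team-performance features fit to predict the outcomes of held-out late-season games, then tuned for accuracy) is roughly the shape of a standard supervised-learning setup. A purely illustrative sketch, with invented feature names and random stand-in data rather than anything the NCAA actually uses:

```python
# Illustrative sketch only: fit a classifier on team-performance
# features, hold out "late-season" games as the test set, and measure
# how well it predicts outcomes. Features and data are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_games = 2000
# Hypothetical per-game features: efficiency-margin difference,
# capped scoring margin, home-court indicator, schedule strength, etc.
X = rng.normal(size=(n_games, 4))
# Stand-in outcomes loosely driven by the first feature.
y = (X[:, 0] + 0.3 * rng.normal(size=n_games) > 0).astype(int)

# Treat the last 20% of games as the late-season test set.
split = int(n_games * 0.8)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```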
The move toward more sophisticated data analysis is undoubtedly a good thing if determining the best 68 teams is the goal. As you will note from the NET factors, they are very similar to the numbers already crunched by the far superior KenPom rankings. But, as with the BCS, be sure to expect the Luddites and the "leave computers out of it, nerds" crowd to cry foul.
Alabama fans can rightly be forgiven for lamenting the passing of the RPI. The Tide's last two tourney appearances and seedings have very much been a credit to who it played, and who those opponents played. The NET system, which weighs late-season results and offensive efficiency, may very well have led to a far different seeding, if not outcome, in Alabama's most recent NCAA appearance.
You can already foresee some of the unintended consequences: the margin-of-victory factor, for instance, is going to lead to a lot more cases of teams running it up on hapless foes (although it is capped at 10 points; I daresay teams won't be holding the ball in the last 90 seconds to shorten the game). Likewise, unless your name is Villanova, ball movement and a sweet layup are going to rate as far more efficient than wide-open perimeter shooting, the very direction in which college basketball has been moving.
The complete press release, details on the NET, and more are here. So, what do we think of this? It’s not a question that really lends itself to a poll, so feel free to set forth your epistles below.