All statistics are courtesy of Football Outsiders, home of the F/+ Combined Ratings for college football.
The S&P+ rating was created by Bill Connelly; check out his college football analytics blog, Football Study Hall.
Last month I introduced a concept called Adjusted F/+ Combined Ratings as a way to compare teams from different seasons. For a number of reasons, it is inappropriate to directly compare F/+ ratings across seasons, so they require a little massaging before you can say the 2011 Alabama Crimson Tide were better than the 2005 Texas Longhorns, and so on.
At the tail end of the Method section of that article, I noted that the approach still does not account for the small change in the "average" team from year to year. I suspected the change was not very large (it’s not), but it sure would be nice to account for it when making comparisons like this.
The problem is that all of the FEI data provided to the public is already normalized within the context of each season; there’s no unadjusted version that could be used to create a baseline. To pull this off, you’d need some kind of un-normalized metric that exists in every season, which you could index against to adjust the F/+ ratings accordingly.
Oh, hey! S&P+ can do that!
The Method
Once again, hidden for those of you who don’t like to know how the sausage is made.
This one’s pretty straightforward. Below is a table showing the average S&P+ score from each season:
Season | Average S&P+ | Modifier
---|---|---
2005 | 204.83 | -0.65%
2006 | 205.80 | -0.19%
2007 | 205.47 | -0.34%
2008 | 206.69 | 0.25%
2009 | 207.91 | 0.84%
2010 | 205.72 | -0.22%
2011 | 206.77 | 0.29%
2012 | 206.18 | 0%
2013 | 206.21 | 0.01%
All-Time Average | 206.18 |
The "modifier" column is just the percent difference between that year’s average S&P+ score and the average S&P+ score across all seasons given in the last row of the table.
For those of you who enjoy your data in chart form:

[Chart: average S&P+ score by season, 2005-2013]
Let me be very clear about that chart – the way I formatted it makes it easy to look at, but kind of inflates the differences from year to year. We’re talking about very, very small fluctuations. Yes, the average team according to S&P+ was better in 2009 than in any other season, but not by much. A point and a half difference in S&P+ is effectively nothing.
That being said, you can start to see how this will end up affecting our adjusted ratings. Teams in 2008, 2009, and 2011 are going to get a slight bump, whereas teams in 2005-2007 and 2010 will get downgraded a tad. Teams from the last two years will not see much change – 2012* in particular.
* I should pause here to offer a hat tip to the 2012 Connecticut Huskies. With an F/+ rating of zero in the most average year in the F/+ era, they’ve achieved mediocrity in its most perfect form. Congratulations guys!
From this point, it’s very easy. Just add the appropriate modifier from the table above to each team’s F/+ rating, then apply the normalization adjustment** described in the previous article, and voila, we have what I’m calling the F/++ rating.
** I checked normality for the pre-normalized adjusted F/+ using this method, as well as the offensive and defensive splits presented in the next section - everything checked out ok.
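If it helps to see the whole pipeline in one place, here’s a hedged sketch that reuses the `modifiers` dict from the earlier snippet. Note that the normalization step below is my own stand-in (a z-score mapped through the normal CDF onto the -50% to +50% scale), since the previous article’s exact adjustment isn’t reproduced here:

```python
from math import erf, sqrt
from statistics import mean, stdev

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def f_plus_plus(ratings: dict, season: dict) -> dict:
    """Sketch of the F/++ pipeline: re-center by season, then normalize.

    ratings: {team: F/+ rating in percent}; season: {team: year}.
    """
    # Step 1: add each season's modifier so every year is re-centered
    # on the era-wide baseline.
    adjusted = {t: r + modifiers[season[t]] for t, r in ratings.items()}

    # Step 2: stand-in for the previous article's normalization adjustment:
    # z-score each adjusted rating, then map its normal percentile onto
    # the -50%..+50% scale (a dead-average team lands at 0%).
    mu, sigma = mean(adjusted.values()), stdev(adjusted.values())
    return {
        t: (normal_cdf((r - mu) / sigma) - 0.5) * 100
        for t, r in adjusted.items()
    }
```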
What has this new adjustment done? It has simply re-oriented each year’s average to a common baseline. Below is a (highly, HIGHLY exaggerated) reference chart of some box plots (remember those?) I drew up – not based on any data, just a visualization. On the left, we have plots with different true means, and we’ve selected a data point with the same value from each year. On the right, we’ve applied the adjustment: now the true means line up, but the spread of the data has shifted as a result. Those markers no longer sit along the same horizontal line, and we can properly evaluate the difference between those data points. The mean in each year will come out to a different non-zero value, but the true means are now equivalent:

[Illustrative box plots: unadjusted seasons on the left, baseline-adjusted seasons on the right]
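Since those box plots are hand-drawn, here’s a tiny numeric illustration of the same mechanics; the data is made up and the gap between seasons is exaggerated, just like the chart:

```python
import random

random.seed(1)

# Toy absolute "quality" scores for two seasons with different true means.
season_a = [random.gauss(100.0, 10.0) for _ in range(500)]
season_b = [random.gauss(102.0, 10.0) for _ in range(500)]

mean_a = sum(season_a) / len(season_a)
mean_b = sum(season_b) / len(season_b)
baseline = (mean_a + mean_b) / 2  # stands in for the era-wide average

# Within-season ratings are centered on each season's own mean, so a
# +5.0 team in season A and a +5.0 team in season B look identical...
rating_a = rating_b = 5.0

# ...until each season is re-centered on the shared baseline:
adj_a = rating_a + (mean_a - baseline)
adj_b = rating_b + (mean_b - baseline)
print(adj_a, adj_b)  # the season B team now correctly rates a bit higher
```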
So, what does this give us? An opponent-adjusted metric that accounts for both drive-based and play-based efficiency and explosiveness, adjusted to a baseline average for the era, and adjusted within each season for the size of the division and any differences in the basic F/+ formula (chiefly, the addition of special teams ratings in 2007). This does assume that the fluctuations present in S&P+ from year to year also manifest in FEI and FEI Special Teams efficiency, but I don’t think that’s too much of a stretch. As far as I’m concerned, this is about the best we can do to compare teams across seasons within the F/+ era.
For those of you who skipped the above: I’m calling this new adjustment the F/++ rating. As before, the best rating a team can achieve is 50%, and the worst is -50%. I also looked at the top 10 offenses and defenses of the era (since 2007; that’s how far back the splits go) using the same method. Without further ado:
The Goods
The Top 20 Teams by F/++ Rating, 2005-2013

Team | Previous Rating | Previous Rank | F/++ Rating | F/++ Rank
---|---|---|---|---
2011 Alabama | 49.87% | 1 | 49.88% | 1 |
2008 Florida | 49.84% | 2 | 49.84% | 2 |
2012 Alabama | 49.82% | 3 | 49.82% | 3 |
2008 USC | 49.77% | 5 | 49.78% | 4 |
2005 Texas | 49.78% | 4 | 49.75% | 5 |
2011 LSU | 49.56% | 6 | 49.58% | 6 |
2005 USC | 49.43% | 7 | 49.35% | 7 |
2009 Alabama | 49.21% | 9 | 49.31% | 8 |
2013 Florida St. | 49.29% | 8 | 49.29% | 9 |
2009 Florida | 49.15% | 10 | 49.26% | 10 |
2010 Boise St. | 48.97% | 11 | 48.94% | 11 |
2009 Texas | 48.67% | 14 | 48.83% | 12 |
2005 Ohio St. | 48.95% | 12 | 48.81% | 13 |
2006 LSU | 48.73% | 13 | 48.68% | 14 |
2012 Oregon | 48.61% | 16 | 48.61% | 15 |
2010 Alabama | 48.65% | 15 | 48.60% | 16 |
2008 Texas | 48.52% | 18 | 48.57% | 17 |
2006 Florida | 48.58% | 17 | 48.54% | 18 |
2010 Auburn | 48.43% | 19 | 48.38% | 19(t) |
2011 Okla. St. | 48.31% | 20 | 48.38% | 19(t) |
The Top 10 Offenses by OF/++ Rating, 2007-2013

Team | OF/++ Rating | OF/++ Rank
---|---|---
2010 Auburn | 49.78% | 1 |
2007 Florida | 49.76% | 2 |
2011 Wisconsin | 49.57% | 3 |
2012 Texas A&M | 49.55% | 4 |
2008 Florida | 49.52% | 5 |
2011 Baylor | 49.23% | 6 |
2012 Alabama | 49.16% | 7 |
2009 Alabama | 49.05% | 8 |
2008 USC | 48.95% | 9 |
2009 Houston | 48.77% | 10 |
The Top 10 Defenses by DF/++ Rating, 2007-2013

Team | DF/++ Rating | DF/++ Rank
---|---|---
2011 Alabama | 49.99% | 1 |
2010 Boise State | 49.80% | 2 |
2008 USC | 49.76% | 3(t) |
2009 Florida | 49.76% | 3(t) |
2011 LSU | 49.71% | 5 |
2012 Alabama | 49.65% | 6 |
2009 Alabama | 49.58% | 7(t) |
2008 TCU | 49.58% | 7(t) |
2008 Florida | 49.22% | 9(t) |
2009 Oklahoma | 49.22% | 9(t) |
A Few Observations
- The overall ratings don’t show any huge changes, just a reshuffling of the same top 20 as last time. I find it interesting that 2008 USC, a team that did not play for the national title, is now considered the fourth-best team of the era by this metric. I talked about this last time as well, but I think that team really got shafted by the BCS. The loss to Oregon State seemed really bad at the time (unranked team, all that nonsense), but the Beavers ended up 9-4 and 30th in the final F/+ rankings for 2008, and the loss was by 6 in Corvallis.

  Still not a "good" loss, but somewhat similar to what befell Florida in 2006 (a 10-point loss at Auburn, who ended up 20th in that year’s F/+ rankings but was 11th in the polls at the time and much better regarded than Oregon State), except Florida had the opportunity to embarrass Ohio State at the end of the year, much to my delight and that of many SEC fans.
- Probably the most shocking result is the DF/++ rating for the 2011 Crimson Tide, which is stupid high at 49.99%. Again, based on how this metric is set up, the best score a unit can achieve is 50%, and this defense missed that by a hundredth of a percent. Unlike the offense chart, this unit is clearly the best defense of the era: the #2 defense sits almost two tenths of a percent lower (versus the two hundredths of a percent separating 2010 Auburn and 2007 Florida atop the offense list).

  That defense is probably the best we’ll see for a long, long time. Twelve men from the three-deep are currently on NFL rosters, and two more (Xzavier Dickson, Trey DePriest) may be there next season. Absurd.
- Most of the teams in the top offenses list had woeful defenses. The exceptions? 2008 Florida, 2012 Alabama, 2009 Alabama, and 2008 USC. That would be three national champions and a team that should have had the opportunity to play for one #defensewinschampionships. 2010 Auburn unfortunately won one as well – ~~sometimes luck and $200,000 wins championships~~ they also had the best offense of the last seven years.
- Wisconsin? Yes, Wisconsin. This was the Russell Wilson year (you might have heard of him), and also the year Montee Ball ran for 1,923 yards, caught 24 balls for 306 more, and tallied 39 touchdowns on the season (matching Barry Sanders’ record, though it took three extra games). While the unit didn’t put up crazy counting stats, this offense was nasty when adjusted for pace. Paired with kind of a crappy defense (317th in DF/++), they stumbled in consecutive road games to Michigan State (avenged in the inaugural B1G ~~celebration of irrelevance~~ Title Game) and Ohio State. In exchange for winning the conference, they received an opportunity to deal with Oregon, losing by a touchdown in the Rose Bowl.
- The top defenses list is peppered with teams that also show up at the top of the F/++ charts #defensewinschampionships. The exceptions were 2008 TCU and 2009 Oklahoma, which finished 183rd and 323rd on the OF/++ list, respectively.
Anything else y’all think is worth mentioning from these lists? Let us know in the comments!