The Historical | SEC Scheduling, Part III

In Part III of this series, we analyze the results from Part II to identify any sources of inequity in the SEC schedule.

You mean all my whining was for naught!?!
Troy Taormina-USA TODAY Sports

All statistics are courtesy of Football Outsiders, home of the F/+ Combined Ratings for college football.
The S&P+ rating was created by Bill Connelly; check out his college football analytics blog, Football Study Hall.
All schedule information courtesy of Winsipedia.

Part I of this series, covering the history of the SEC and its football scheduling methodology, may be found here.
Part II of this series, covering the schedule rating methodology, may be found here.

Now that we have data, what does it tell us?

Tables of numbers are nice[1], but they don’t necessarily tell you anything useful. Any rational observer knows that conference schedule strength is cyclical, because conference strength is cyclical. Alabama, at least according to the advanced metrics, has been the SEC’s strongest team over the last eight years, and in turn the SEC has been the country’s elite football conference over the same timeframe. The decade before that? Not pretty for Alabama, and while the SEC was still a fine conference, claiming it was the best conference would be difficult.

1 | Especially when presented so colorfully!

The reason I bring all that up is just because a team happens to be at the bottom or the top of the chart is not, on its own, evidence of anything. Someone always has to be at the bottom of the chart; while I’m sure you could retroactively pump out a schedule in a hypothetical one-division conference that is relatively close to even strength for everyone involved, that’s damn near impossible to plot out for future seasons, and certainly not in a conference with divisions. There will be a schedule inequity every year, regardless of what methodology you use to construct that schedule — it’s unavoidable. The question we have to ask is does the SEC’s scheduling methodology introduce an inequity that systematically favors or punishes a particular team? Put another way — is this scheduling model fair?

Answering that question is the main purpose of this article, but first, here’s a link to the spreadsheet with all the data. Feel free to poke around, draw your own conclusions, etc. — there’s more in there than I’ll be talking about below. One point to note: I counted neutral site games in the Away calculation, because that’s what they are — games away from a team’s home field. That made more sense to me than calculating a separate split for neutral-site games, which for most regular seasons consists of one contest a year. If you have any other questions about the sheet or anything in it, feel free to leave a comment or send an email.

The Cross-Division Schedule and Permanent Cross-Division Rivals

As evidenced by the numerous articles on the network pertaining to the topic, the permanent cross-division rivalry system is perceived as the largest source of inequity in the current scheduling model. The complaint generally runs along the lines of “Team X’s permanent rival is typically stronger than Team Y’s permanent rival, so this system makes Team X’s schedule harder than Team Y’s.” As far as the one game on the schedule dedicated to that permanent rival is concerned, that’s absolutely true: Team X’s permanent rivalry game is tougher than Team Y’s in this scenario. But this is just one game, right? Is that game significant enough to impact the cross-division schedule to a degree that it drives the strength of the overall SEC schedule?

If it did, you would see a relatively strong correlation between the cross-division schedule strength and the SEC schedule strength. Here’s a plot of that data from the sample[2]:

2 | Once again — 2007-2014, regular season games only, no Missouri or Texas A&M.

Looks kinda noisy, right? You’ll note I took the liberty of plotting a linear regression line there, along with the associated coefficient of determination, which from now on I’ll be referring to by its common representation, R². With an R² of just 1.37%, it’s highly, highly unlikely there’s any correlation to be found here. But you can’t take R² at face value: depending on the sample size and the situation, something as small as 1.37% can still be statistically significant. That’s why we have hypothesis testing for statistical significance.

In this case, the appropriate test is the regression t-test, which uses the sample size[3], the R² value from the linear regression, and the t-distribution to determine whether the correlation is statistically significant. The test is structured such that the null hypothesis is R²=0[4]; here, that’s the hypothesis that there is no correlation whatsoever between cross-division schedule strength and SEC schedule strength.

3 | 12 teams across 8 seasons yields 96 data points.

4 | With the accompanying alternate hypothesis of R²≠0.

I’ll spare you the actual math[5], but using the standard confidence level of 95%, the test came back indicating the null hypothesis could not be rejected. Translated into something useful[6], that’s saying the evidence suggests there is no statistically significant correlation between cross-division schedule strength and SEC schedule strength. If you’d like to interpret that as cross-division schedule strength hasn't mattered with respect to the SEC schedule strength over the past eight years, I’m not going to stop you, because that’s essentially what this test shows.

5 | It’s on the “Analysis” tab in the spreadsheet I provided above if you’re really interested.

6 | I understand but generally loathe the anal verbiage requirements of statistics.
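For the curious, the mechanics of that test are simple enough to sketch in a few lines of Python. This is a minimal illustration of the regression t-test described above, not the spreadsheet’s actual formulas; the critical value below is the standard two-sided 95% cutoff for 94 degrees of freedom:

```python
import math

def regression_t_stat(r_squared, n):
    """t statistic for testing H0: R^2 = 0, with n - 2 degrees of freedom."""
    r = math.sqrt(r_squared)
    return r * math.sqrt((n - 2) / (1 - r_squared))

t = regression_t_stat(0.0137, 96)  # R^2 = 1.37%, 12 teams x 8 seasons
t_crit = 1.986                     # two-sided 95% critical value, df = 94

# |t| comes out to roughly 1.14, well short of 1.986, so we fail to
# reject the null hypothesis: no statistically significant correlation.
reject_null = abs(t) > t_crit
```

Nothing fancy: a small R² needs a large sample (or a much bigger t statistic) before the test will call it significant, which is exactly why you can’t eyeball R² alone.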

Ah, but I know what you’re thinking. This is from 2007-2014, but what about since the league went to the 6-1-1 scheduling model in 2012 to accommodate our new friends in Missouri and Texas A&M? In that model, the permanent cross-division rival accounts for half of the cross-division schedule strength; it’s still just an eighth of the overall schedule, but now it’s a larger part of the cross-division component. I used the same procedure as before, but this time I included Missouri and Texas A&M in the data, since we’re only looking at those three years anyway. That yields 42 data points (14 teams across 3 seasons), as shown below:

Using the same test as above, I again found that the null hypothesis could not be rejected, which means that even with the move to the 6-1-1 scheduling format, there is still no statistically significant correlation between cross-division schedule strength and SEC schedule strength. Ah, but something jumps out from this chart. You see that line of green marks several ticks above the rest of the pack? Those are the seven SEC West teams from 2014[7], and those data points have a profound effect on the regression. If you were to remove just those seven data points[8], all of a sudden the relationship becomes significant. It’s too soon to make a call either way after just three years, but this is something that should be tracked moving forward. Given that it took the toughest division in recent college football history to “right the ship”, as it were, I’m inclined to say the 6-1-1 model is not going to work in the long term, but again, it’s too soon to tell.

7 | Again, that division was insane last year.

8 | Or confine the analysis to 2012 and 2013 only.

Is There Evidence of Favoritism in the Overall SEC Schedule?

We’ve established the cross-division scheduling hasn’t had a significant impact on the overall schedule — at least not until the 6-1-1 model came around, and even then it’s murky — but what about the SEC schedule overall, including intradivision matchups? Has the SEC consistently favored one team over the others during the past eight seasons[9]? Perhaps they monkeyed with the schedule in a particular year to get a particular result[10]?

9 | Dem bammerz is cheatin’, PAWWWWL!

10 | The REC paid ‘em off in 2008, PAWWWWL!

To take a stab at answering these questions, we first have to establish the hypothesis we’re trying to test: does either the team or the year have a statistically significant effect on schedule strength over this timeframe? For those of you with some experience in the statistical arena, that phrasing should immediately suggest Analysis of Variance, more commonly known as ANOVA. ANOVA is an offshoot of the multiple regression process that tests whether the means of different groups are significantly different; in this case, it would tell us whether mean schedule strength differed significantly among the teams or among the years. There are three assumptions that have to be met when using ANOVA:

  • The dependent variable (schedule strength) is normally distributed in each group being compared (Team, Year),
  • The variances in each group must be homogeneous (equal), and
  • Observations must be independent and random.
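As a point of reference for what ANOVA actually computes, here’s a minimal pure-Python sketch of the one-way F statistic, the ratio of between-group to within-group variance. This is an illustration only, not the spreadsheet’s method:

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square."""
    k = len(groups)                           # number of groups
    n = sum(len(g) for g in groups)           # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Identical group means -> F = 0 (no between-group effect at all)
f_zero = one_way_anova_f([[1, 2, 3], [1, 2, 3]])
# Separated group means -> large F relative to the within-group noise
f_big = one_way_anova_f([[1, 2], [3, 4]])
```

The resulting F gets compared against an F-distribution critical value to decide significance; the larger F is relative to that cutoff, the stronger the evidence that the group means genuinely differ.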

The first two are no big deal — the ANOVA method is generally considered to be very robust against violation of these assumptions, and there are alternative testing methods available when the data doesn’t meet these requirements. That last one, though? That’s a showstopper. “Independence” in a statistical context means the observations do not affect one another. In some cases, that would apply here: Alabama’s schedule strength in 2008 is in no way related to Vanderbilt’s schedule strength in 2013. The problem is it absolutely would be related to Vanderbilt’s schedule in 2008, and relations like that are all over the data. Looking at just SEC schedule strength, we’re working with a set of games where the participants all play each other, such that everyone’s schedule strength is interrelated with everyone else’s. That’s highly, highly undesirable when trying to address questions like these.

That lack of independence produces a phenomenon called multicollinearity, and that basically blows ANOVA to pieces. You can check for multicollinearity using a metric called the variance inflation factor, which again I’ve done in the "Analysis" tab of the spreadsheet linked up above. The point I'm getting around to is that, as far as the braintrust[11] and I are concerned, this question can’t be answered, at least not with any statistical methods with which I’m familiar. If an epiphany happens down the line I’ll revisit it, but for now this will remain unaddressed, unfortunately.
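For reference, the variance inflation factor itself is a one-liner once you have the auxiliary R² from regressing one predictor on all the others; a quick sketch, with the usual rule-of-thumb threshold noted:

```python
def vif(aux_r_squared):
    """Variance inflation factor for a predictor, given the R^2 from
    regressing that predictor on all the other predictors."""
    return 1.0 / (1.0 - aux_r_squared)

# A fully independent predictor (auxiliary R^2 = 0) has VIF = 1;
# common rules of thumb flag VIF above roughly 5-10 as problematic
# multicollinearity.
vif_independent = vif(0.0)  # 1.0
vif_collinear = vif(0.9)    # about 10
```

With everyone’s schedule strength interrelated the way it is here, the auxiliary R² values are high and the VIFs blow up, which is exactly the problem described above.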

11 | When an engineering Ph.D. candidate and a statistics teacher both tell you you’re out of luck… yeeeah...

Is Alabama’s Schedule Really That Weak?

Last time, I noted Alabama had the 9th-toughest SEC schedule over the last eight years, quite a bit lower than their cohorts in the SEC West, who took the top five spots. While I’m sure that result was unsurprising to the four or five readers from other fanbases that stumbled across the article, I was initially a bit confused, given some of the wars the Tide’s gone through lately.

Then, it dawned on me: Alabama doesn’t have to play Alabama. Everyone else has to play Alabama.

Now, that might sound like unbridled gump arrogance at first, but it’s a point worth considering. Here are the team strengths according to NS&P+ again, previously shared in Part II:

Team 2007 2008 2009 2010 2011 2012 2013 2014 Avg.
Alabama 3.1 19.8 24.0 22.9 27.5 28.5 22.2 28.3 22.04
LSU 22.6 9.9 15.5 15.0 28.7 15.4 15.9 16.5 17.44
Florida 21.8 30.6 25.0 10.4 6.4 22.4 9.7 11.6 17.24
Georgia 14.4 10.9 7.7 9.2 15.2 18.5 16.4 22.6 14.36
S. Carolina 9.6 8.4 12.9 20.0 9.8 15.8 17.5 7.9 12.74
Auburn 9.8 -1.6 9.1 23.9 4.6 -2.6 20.4 23.6 10.90
Arkansas 5.0 4.8 12.7 19.8 12.3 7.4 0.3 23.1 10.68
Tennessee 16.6 5.8 15.2 1.5 7.2 8.4 6.5 14.2 9.43
Ole Miss -3.3 13.2 6.1 2.3 -2.0 13.1 6.9 23.0 7.41
Miss. State 3.2 -7.1 7.1 10.5 2.4 6.4 13.4 17.8 6.71
Kentucky 10.6 -1.1 3.3 0.9 -6.4 -3.3 -3.4 1.5 0.26
Vanderbilt 3.5 3.2 -9.4 -8.9 11.3 3.3 -0.1 -10.9 -1.00

Alabama’s been at the top, on average, by a large margin over the last eight years. The separation between the Tide and #2 LSU is roughly equivalent to the difference between LSU and #5 South Carolina, and the divide gets even wider as you move down the chart; that’s more significant than you might think when trying to put all of this into context. But how do we adjust for that? To be honest, I don’t think there’s a perfect way to do it[12], so this next bit is more for fun than anything else: what if each team in the league had a ninth game on their SEC schedule every year, an intradivision game at a neutral site against a carbon copy of themselves from that season? Well, you would get this:

12 | Struggling with these issues and the previous section are why it took so long to get you Part III, just for clarification.

Team Adj. SEC Strength (Overall Rank)
Alabama 125.71 (1)
LSU 124.14 (2)
Arkansas 122.52 (3)
Ala. Poly 120.67 (4)
Ole Miss 108.30 (6)
Miss. St. 101.95 (9)
Florida 113.15 (5)
Creamsicles 104.39 (7)
S. Carolina 103.21 (8)
Georgia 100.35 (10)
Kentucky 87.58 (11)
Vanderbilt 81.24 (12)

And there it is. The SEC West still has the toughest schedules in the league, but the intradivision order has changed quite a bit. The Tide are now on top of the heap, just a hair ahead of LSU at #2. The SEC East schools reshuffle based on team strength, with Vanderbilt now at the bottom of the pile. Again, this isn’t definitive by any stretch, but it’s something to consider the next time someone bashes Alabama’s schedule.
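The “play yourself” adjustment is easy to express if you treat schedule strength as a simple sum of opponent ratings. That’s a simplifying assumption for illustration; Part II’s actual formula lives in the spreadsheet:

```python
def adjusted_sec_strength(opponent_ratings, own_rating):
    """Schedule strength with a hypothetical ninth game against a
    carbon copy of yourself: your own rating joins the opponent list."""
    return sum(opponent_ratings) + own_rating

# Toy example with made-up ratings: two teams facing the same eight
# opponents end up separated by exactly their own-strength gap, so the
# stronger team's schedule gains more from the self-game.
strong = adjusted_sec_strength([10, 15, 20], 25)  # 70
weak = adjusted_sec_strength([10, 15, 20], 5)     # 50
```

This is why the adjustment pushes Alabama to the top of the chart: the one team Alabama never has to face is, on average, the strongest team in the league.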

So we’ve shown that cross-division scheduling has had no statistically significant impact on overall SEC schedule strength over the last eight years, and we’ve shown Alabama’s schedule isn’t as bad as you might think. As far as the former’s concerned, there’s some evidence the 6-1-1 model may generate a consistent inequity in the schedule during future seasons, but it’s too soon to tell for sure. To wrap this series up[13], we’ll take a look at some alternative scheduling methodologies and how they may be better suited for a 14-team SEC.

13 | Soon. Ish.