Tuesday, July 31, 2007
Admit it, you hear about it all the time, you probably talk about it all the time, and you probably think it is a very significant factor in the outcome of games.
But is it really? Does home field advantage really make a difference in the outcome of games? Oh sure, the talking heads go on and on about it, but does it really count?
Well that's a good question, and believe it or not, it's a very complex question. Some would like to paint that question in simple terms of "Team X has the loudest stadium, and it's much tougher to beat Team X in Team X's home stadium." But, of course, things in life are rarely that simple, and the phenomenon of home field advantage is no different.
In fact, believe it or not, there is a lot of data, and I do mean a lot of data, that indicates that there really is, effectively speaking, no such thing as home field advantage. This forum will be one of the few places you hear that, but it is nevertheless true.
For example, if you break down the results in SEC games (48 SEC intra-conference games per year), there seems to be little legitimate correlation between the overall results and where the game was played. In most years, the home teams win slightly more games than the road teams (usually by about one game, something like a 25-23 overall record for the home teams), but that's not always a given. It's pretty common for the road teams to actually have a better overall record in conference play than the home teams. In the end, it's very close one way or the other, with neither the home teams nor the away teams having any real advantage, and honestly, what very little difference actually exists between the two is almost certainly statistical white noise.
Beyond that, we have further evidence.
Let's take a look at the last five years. If you had to say, which two SEC teams performed better at home, relative to their road performance, than the rest of the SEC? Surely, if a team does have a home field advantage, and if that advantage really makes a tangible impact on the outcome of games, then the teams with the biggest home field advantage would generally speaking have the best home performance relative to their road performance, right? Not quite. In fact -- get ready for a shocker -- the two SEC teams that have performed the best at home relative to their performance on the road have been -- again, get ready for a shock -- Ole Miss and Mississippi State. And correct me if I'm wrong, but you never hear anybody talking about how difficult it is to play in Vaught-Hemingway, or Davis Wade Stadium.
So what about some of the big boys of the SEC with those massive 90,000+ seat stadiums? Surely there is a big home field advantage there with those extremely loud crowds. Uh, well, not quite. Take LSU, for example, and their vaunted Tiger Stadium. Believe it or not, despite all of the talk about how supposedly difficult it is to play in "Death Valley," in the five-year stretch from 2001-2005, LSU was 15-5 at home in conference play, and 15-5 on the road in conference play. At bottom, it was the same record, regardless of where they played. We'll go even deeper than that, looking at Alabama, Georgia, and Tennessee in that same five-year stretch. Despite massive stadiums, it did them little good. In fact, all three teams had a significantly better record on the road than they did at home. If anything, for that group, playing at home was at times seemingly a death knell more than anything else.
At bottom, as a whole, big-time programs have big-time winning records at their home stadiums compared to their road records. And the reason is simple: when you take into account all games, you take into account all of the Sisters of the Poor that a team will play, and those patsies greatly inflate the overall home record (even we went 6-2 at home in 2006, for example). And obviously, since you don't play home-and-home series with the Sisters of the Poor, you never play them on the road, where they would inflate that record too, which causes the disparity between the overall home and away records. So, as expected, once you filter out all of the Sisters of the Poor, that inflated record goes away. At bottom, you generally win fewer games against better teams, and more games against poorer teams, regardless of where the game itself is played.

For example, we all know the argument from the 2006 LSU season. They went 4-0 at home in SEC play, but 2-2 on the road, and the argument was that had those four road games been played in Tiger Stadium, they would have won them all. In reality, though, that argument doesn't hold water. The disparity between the two records did not result from the location of the games, but from the overall quality of the opponents faced. The four home games included three teams with losing records, and the overall record of those four teams was a measly 21-28 (a 42.8% winning percentage). On the other hand, the four road games included the eventual national champion, a top ten team, a top fifteen team, and also a team that went 9-4 and finished in the top 25. All told, the overall record of those four teams was 43-11 (a 79.6% winning percentage). Had the schedule been reversed, it's very likely they would have gone 2-2 at home and 4-0 on the road. Again, it wasn't the location of the games that made the difference, it was the disparity in the quality of opponents that mattered.
The point of the matter is that when playing similar teams (such as conference games), teams tend to win more games against lesser opponents, and win fewer games against better opponents, regardless of whether the game is played at home or on the road.
Going even deeper than that, you often hear that home field advantage really makes a difference in close games. So is that valid or not? In a word, no. Brian C. Fremeau, who publishes a college football rating system called the Fremeau Efficiency Index, did the research for a November 9th, 2006 column at Football Outsiders. In it, he compiled a list of every Division 1-A college football game that was decided by seven points or less, and he found that 172 games fell into that category in the 2006 regular season. Want to guess the home record and the road record? All told, in those 172 games, home teams went 86-86 (.500), and road teams went 86-86 (.500).
Fremeau later summed it up best: "Home-field advantage may be an emotional factor, but it does not appear to be a significant statistical one."
In fact, several coaches have indicated that they feel it to be advantageous to play on the road. That may sound odd at first because you hear so much about home field advantage, but there's a legitimate rationale behind it. Pat Dye, arguably, is that viewpoint's most prominent proponent, and he long stated his belief that playing on the road presented a more advantageous situation. According to Dye, teams are more focused on the road due to fewer off-field distractions (especially the day before the game and the day of the game), and also going into a hostile environment forces teams to band together as, well, a team, which though not quantifiable, is an important factor in winning football games.
Again, that probably sounds odd because you never hear it, but nonetheless it is a pretty solid rationale.
At the end of the day, the only real support for home field advantage seems to be what people say. In other words, support for the existence and legitimacy of home field advantage usually amounts to nothing more than unqualified verbal and written speculation from fans and / or questionably-qualified "experts" like the Lee Corsos of the world.
But why does everyone so badly want to believe in home field advantage? I say two main reasons.
For one, teams that lose games on the road suddenly have a convenient excuse for the loss. You hear it all of the time, as we mentioned earlier. Though it's a weak-at-best argument (the speculative outcome of a game played somewhere else doesn't matter in the least; all that matters is the outcome of the game where it is actually played), fans cling to it passionately, even to the point of absurdity. Again with the 2006 LSU team: even most LSU fans continue to cling to the notion that they would have beaten Florida had the game been in Baton Rouge instead of Gainesville. Only one problem with that: the game wasn't even close. Florida took a 23-7 lead early in the third quarter, and from there sat on it. Only a long LSU field goal four minutes into the fourth quarter brought the game within two touchdowns, and they never got closer. Truth is, even the people who think home field advantage has a massive impact on the outcome of games would readily admit it could never have made that much difference in a game like that. And keep in mind I'm not trying to pick on the LSU people here; if it is being read that way then that is incorrect. This type of rationale is common among all college football fans, unfortunately. At bottom, people will cling to the notion that "Oh yeah, well we would have won had it been at our stadium" (as if that doesn't sound like a middle school argument), even in the face of sheer absurdity.
Two, you have to consider this: If home field advantage is legitimate and significant, a fan literally is part of a win. The fan literally plays a role in his team winning, and in that sense he's essentially like a player on the team; it produces the feeling that, hell, the common fan may as well have caught the game-winning touchdown pass. And of course, fans desperately want to believe that. They want to believe that their actions have a very tangible, very real, and very significant positive impact on their favorite team, even if there is no real evidence indicating anything of the sort. No one wants to believe that they pay ungodly amounts of money and yell and scream in support of their favorite team for it to have essentially no impact. Fans want to believe the former notion.
But wanting to believe something doesn't make that something true. Nor does everyone talking about it, even if it is ESPN and the like doing the talking.
At bottom, despite vehement assertions that home field advantage is real and significant, there is simply little to no actual empirical evidence that there is any such thing as a home field advantage in college football, and what little "advantage" could possibly be rationally construed from the data seems to be so small that it is essentially insignificant in statistical terms. Just because something is constantly repeated does not make it true, and home field advantage in college football seems to be a good real-world example of that.
Again, Fremeau summed it up perfectly. Home field advantage may represent a significant emotional attachment for the fans, but in an empirical, real-world sense, it seemingly has no real, legitimate, or significant impact on the outcome of college football games.
In reality, though, considering it is all one defense, the two things should be related. Or are they? That's basically what I set out to determine. At bottom, is there any correlation between the number of sacks a team generates and how well its pass defense performs? Or does the number of sacks created have little if any correlation to how well a pass defense performs, as Joe Kines contended?
To figure it out, I ran the numbers on how well Adjusted Sacks correlated to a variety of different measures of performance for a pass defense, including opposing quarterbacks' average passer rating, yards per completion, yards per attempt, completion percentage, interception rate, and touchdown rate.
Before I list the results of what I found, I'll give a very brief primer on correlation for those who may not be familiar with the concept. At bottom, correlation ranges from -1 to 1; values near 1 or -1 indicate strong correlation, and values near 0 indicate essentially no correlation. A value approaching 1 means there is positive correlation, i.e. the more x happens, the more y happens. A value approaching -1 means there is negative correlation, i.e. the more x happens, the less y happens.
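For those who want to see the concept in action, here's a quick Python sketch of how a correlation coefficient is actually computed. The numbers below are made up purely for illustration; they are not the actual SEC data.

```python
def pearson(x, y):
    """Pearson correlation coefficient, which ranges from -1 to 1."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical numbers for six teams: as sacks go up, passer rating goes down,
# so the correlation comes out strongly negative (near -1).
sacks = [10, 14, 18, 22, 26, 30]
rating = [150, 140, 128, 120, 110, 100]
print(round(pearson(sacks, rating), 4))
```

Two series that moved up and down together would instead produce a value near 1, and two unrelated series would produce a value near 0.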
So, with that in mind, the following is how well Adjusted Sacks correlated to that variety of measures, in descending order:
Opposing QB average passer rating: -.4997
Touchdown rate: -.4693
Completion percentage: -.4660
Yards per attempt: -.4489
Yards per completion: -.3087
Interception rate: -.2573
As you can see, opposing teams' passing numbers generally go down greatly as the number of Adjusted Sacks increases.
Opposing quarterback average passer rating, touchdown rate, completion percentage, and yards per attempt all have a pretty high negative correlation with the number of Adjusted Sacks. Again, there's just no real way to dispute the data, for the most part. Unless it's just an odd-ball result of a one-year sample size, which I highly doubt given the breadth of the statistics, it generally just showcases that pressuring the quarterback pays off.
I was thinking, perhaps, that what I might find would be that a good number of teams that racked up a lot of Adjusted Sacks did so by blitzing heavily, and that things would even out in the end with those defenses giving up a lot of big plays and completions. But, as the breakdown showed, that simply wasn't the case. Certainly some teams did get a high number of Adjusted Sacks by blitzing heavily, but they didn't seemingly give up very many big plays or a lot of completions. I suppose it isn't quite like the old Joe Lee Dunn days at MSU, where two sacks would be followed by a 38-yard completion against a seven-man rush. You have to give credit to defensive coordinators and other defensive coaches; generally, it seems, even when they bring lots of pressure, the defensive backfield still, relatively speaking, prevents big plays.
The only snafu in the entire analysis was Interception Rate. There was a -.2573 correlation between Adjusted Sacks and Interception Rate, meaning that as Adjusted Sacks went up, the interception rate tended to go down. Perhaps this shouldn't have been too shocking, especially considering the results of LSU, Arkansas, and Alabama, but it still is just a bit. You would think, intuitively, that more pressure would result in more bad throws and more interceptions, but the reverse is true. I suppose that the man-to-man coverage on the outside, combined with the shorter throws when the blitz is expected, means that defensive backs are not in as good of a position to intercept the football. And moreover, again since they are in man-to-man coverage, cornerbacks may not want to be as aggressive, knowing that they would have no help over the top if they made the slightest mistake. I know, it doesn't seem right intuitively, but more sacks correlate with fewer interceptions, at least in this data set. I would point out, however, that a correlation of only -.2573 isn't particularly high, so maybe it is just statistical white noise that would work itself out with a larger sample size. I suppose we'll see when we run the same numbers this time next year.
At the end of the day, you just have to be brutally honest about it. At bottom, it seems that if you were going to be successful in pass defense, you had to get after the quarterback. If you didn't, you just weren't going to be very good in defending the pass.
As for Kines, I certainly don't want to criticize his philosophy, because obviously he's forgotten more about football than I will ever know, but the numbers just seem to invalidate his defensive philosophy. Kines essentially bet that we could drop seven and eight defenders into heavy zone coverage, and opposing quarterbacks wouldn't be able to beat it. Unfortunately, at least in 2006 (that philosophy worked with all of the talent we had in 2005), that didn't work. Quarterbacks did effectively pick apart the defense with a relatively high degree of regularity, and with that in mind we struggled greatly in terms of pass defense. And, honestly, a lot of that credit goes to opposing quarterbacks. Perhaps there was a time when you could count on opposing quarterbacks to make a lot of mistakes throwing into heavy zone coverage, but that time seems to have passed.
The good news is that with Saban in charge now, Adjusted Sacks will go way up, and that should drive opposing team's passing numbers considerably lower.
Monday, July 30, 2007
On the surface, that may seem like the textbook definition of a minor, insignificant change. But you would be very wrong.
In fact, Kentucky head coach Rich Brooks was quoted as saying, "It’s going to be one of the most significant rule changes to come about in recent years, maybe in a decade in college football."
So how exactly will it change things? Well, Coach Brooks had something to say on that as well. Specifically, he had the following musings:
"You’re gonna see offenses starting with a lot better field position. You’re gonna see scoring averages go up because of this rule change. You’re gonna see a lot more gimmicks on kickoff coverage. By gimmicks I’m talking with pooch kicking, possible squib kicking. There may be some people that decide they want to kick it out of bounds and give it to the team on the 35-yard line rather than kicking it deep and having a return out to the 40 or 45."

Florida Coach Urban Meyer later agreed with Rich Brooks' comments. In fact, he specifically mentioned a study that has concluded that the average kick-off will now be fielded at the nine-yard line.
Georgia Coach Mark Richt even went as far as stating that last year his team returned approximately 25 percent of all kick-offs, but that now he expects his team to return somewhere between 75 and 90 percent of all kick-offs.
There are no two ways about it. This rule change may seem minor on the surface, but it is not. This is a massive rule change that will have a massive effect.
So exactly what will be the end result?
Well, I don't know for certain, of course, but we can speculate to a great degree about what the effect will be.
For starters, as Mark Richt noted, a much higher percentage of kicks will be returned in 2007. In fact, it will be very much a rarity for a kick not to be returned, even when a kicker with great distance is handling the kick-off duties.
Moreover, the starting field position is, of course, going to be much better. Considering that the average kick return historically hovers somewhere between twenty and twenty-five yards -- and if the Meyer-referenced study is correct in its projection that the average kick will be fielded at the nine-yard line -- the average starting position for receiving teams will be somewhere between the twenty-nine and thirty-four yard lines. And again, that's just the average. Teams that struggle in kick coverage and kick distance will routinely see opponents starting beyond the forty-yard line.
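If you want to check that back-of-the-envelope math yourself, it works out like this, using the nine-yard-line figure from the Meyer-referenced study and the historical twenty-to-twenty-five-yard average return as the two assumptions:

```python
# Back-of-the-envelope field position math, using the assumptions above.
fielded_at = 9                    # study's projected average fielding spot
return_low, return_high = 20, 25  # historical range for the average kick return

start_low = fielded_at + return_low    # the 29-yard line
start_high = fielded_at + return_high  # the 34-yard line
print(f"Average drive start: between the {start_low}- and {start_high}-yard lines")
```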
The better field position, of course, will also greatly increase the chances of scoring. It goes without saying that the further advanced the ball is, the greater the odds of scoring are. For example, take this into consideration: When the NFL kicked off from the 35-yard line, teams that got the ball first in overtime won 50% of the time, and teams that kicked won 50% of the time. When they moved the kick-off back five yards to the 30-yard line, however, it made a big difference. Since then, teams that get the ball first in overtime win 60% of the time, and teams that kick win only 40% of the time. Again, moving the ball five yards further back can make a massive difference in the chances that the receiving team will score on the ensuing possession, because the odds of scoring are much higher due to better field position.
And, of course, the greater chances of scoring on each drive following a kick-off will result in higher scoring games. In 2006, the average SEC team scored 20.6 points per game, and that number is likely to creep up towards, if not beyond, 22 points per game in 2007.
Coaches, as alluded to earlier by Rich Brooks, will compensate in a variety of ways. To begin with, we'll see more pooch kicks and squib kicks, and we'll see much more complex kick coverage schemes in an attempt to limit returns. In the past, kick coverage schemes have oftentimes been pretty vanilla, but that is likely a thing of the past now. Beyond that, you are almost certainly going to see much better players on special teams. In the past, many coaches have opted for lesser players (often freshmen and others buried on the depth chart) when composing their special teams units, but 2007 will see, generally speaking, much higher quality players on both the kick coverage and kick return teams. The much higher potential reward, and potential loss, simply dictates that coaches will be forced to take on more risk in that sense.
Of course, having better players on special teams will impact the rest of the game in yet another big way: injuries. It's no secret that the two most dangerous plays in all of football are the kick-off and the punt (massive men hitting each other at top speed, going in opposite directions; you do the math). And with much better players on special teams, it's inevitable that those much better players will occasionally get hurt. Again, that could make a huge difference when, for example, a star player who before 2007 wouldn't have even been playing special teams goes down for the year with a torn ACL on a kick-off. Say, for a more concrete example, Demeco Ryans had broken his ankle covering a kick against Southern Miss in 2005, when he would have otherwise been sippin' Gatorade on the sideline; exactly how do you think that would have impacted our season? I'm pretty sure you get my point now. The injury impact of the move could be very significant, and likely will be very significant for the unfortunate few teams that lose a key player in the process.
All told, it's going to greatly emphasize both kick coverage and kick returns. Both coaches and players are going to focus greatly on kick coverage and kick returns, and you'll quickly see the difference when you watch games either on television or at the stadium.
The bottom line is this: teams that can cover kicks and return kicks well will be in great shape, and teams that can't cover kicks very well or return kicks very well are going to find themselves losing a lot of football games.
This is a major rule change, and it will have a massive overall impact. How well each team adjusts its performance to the rule change will likely have a significant impact on the team's final win-loss record. At bottom, if you are looking for a good season in 2007, you had better get your kick-off team in tip-top shape, or it's going to be a tough row to hoe.
As was the case with total offensive sacks allowed, defensive sacks created on its own is a pretty meaningless number, and only takes on real meaning when you put it into the context of pass attempts. Long story short, a team that piles up a lot of sacks can be a relatively poor pass rushing team, and a team that has only a relatively few sacks can in fact be a good pass rushing team, depending on how many passes they have thrown against them.
So, what I have done is taken the total number of sacks created by a particular team in SEC play in 2006, and divided that by the total number of pass attempts against the same team. That division yields a percentage of passing plays that I call Adjusted Sack Rate, which of course is the percentage of passes that resulted in a sack.
The following is how the SEC ranked in terms of sacks in 2006:
Moreover, we'll break things down a bit further. I understand that Adjusted Sack Rate can be a bit hard to grasp, being a raw percentage and all, so I also added Adjusted Sacks. Basically, Adjusted Sacks is the number of sacks that a team would have gotten had they faced the league average of pass attempts (223). Again, it doesn't change the data at all, it just makes it a bit easier to comprehend and fully understand by putting it in concepts more familiar to football fans.
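For the sake of clarity, here's a quick Python sketch of both calculations. The 223 league-average figure is from above; the hypothetical team's 20 sacks on 250 pass attempts faced are made up purely for illustration.

```python
LEAGUE_AVG_ATTEMPTS = 223  # league-average pass attempts faced in SEC play

def adjusted_sack_rate(sacks, pass_attempts):
    """Fraction of opposing pass plays that ended in a sack."""
    return sacks / pass_attempts

def adjusted_sacks(sacks, pass_attempts):
    """Sacks the team would have had, facing the league-average attempts."""
    return adjusted_sack_rate(sacks, pass_attempts) * LEAGUE_AVG_ATTEMPTS

# Hypothetical team: 20 sacks on 250 pass attempts faced
rate = adjusted_sack_rate(20, 250)
print(f"Adjusted Sack Rate: {rate:.1%}")              # 8.0%
print(f"Adjusted Sacks: {adjusted_sacks(20, 250):.1f}")  # 17.8
```

As the post says, the conversion to Adjusted Sacks changes nothing about the data; it just re-expresses the rate on a scale (sacks per season) that football fans already think in.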
Also, I've looked at teams not just by sacks alone, but also the lost yardage from those sacks. As you'll see, I've ranked the SEC in terms of the average yards lost per sack as well.
Without further ado...
So there it is. If you've ever wanted better statistics regarding sacks in 2006, now you have them.
Honestly, I really don't have a whole lot to say regarding the data itself, and the main reason I did this was to get data needed for a later post.
Why, you ask?
At bottom, a sack is a pretty meaningless thing in and of itself. You oftentimes see teams that have a lot of sacks because they consistently blitzed everything but the kitchen sink, and while that did accumulate a lot of sacks, it still did not mark solid defensive play. Opposing teams would know their heavy blitz tendencies and game-plan accordingly, with a lot of max protect packages and short drops, and the end result would be lots of completions and many big plays when a defensive back, playing man-to-man coverage, missed a tackle and there was no one over the top to clean it up. So really, the reason I did all this is to compare it (hopefully in a post coming shortly) to 2006 SEC pass defense results. Again, hopefully that post will be coming shortly.
But I will say a few things...
Namely, Alabama just didn't get it done. The pass rush was a little better than most would have thought (eleventh instead of dead last), but there wasn't much to it. As I've posted before, the underlying idea behind the Kines scheme with regard to pass defense was to drop a lot of defenders into coverage, usually only rushing three or four defenders, and then forcing the opposing quarterback to throw into heavy zone coverage. And, well, that's exactly what we did. It just didn't work. We didn't get very many sacks (as expected, even by Kines), but we also didn't force incompletions with the heavy zone coverage. Again, at bottom, it just didn't work. You really can't put it any way other than that.
Georgia does need to be mentioned. The Dawgs finished third in the conference in total sacks, which sounds about right, considering their two outstanding defensive ends, Quentin Moses and Charles Johnson. However, once you analyze the Dawgs through Adjusted Sack Rate, they only finished fifth in the conference, and really just looked like a very mediocre pass rushing team. Honestly, that shocked me. I fully expected Georgia to be one of the, if not the, top pass rushing teams in the conference. With Moses and Johnson gone to the NFL, and Georgia having to replace eight starters on defense in 2007, perhaps there are some signs for concern in Athens. All told, the Dawgs were the textbook example of racking up a lot of sacks against the Sisters of the Poor, piling up ten sacks against the, shall we say, vaunted triumvirate of Western Kentucky, Colorado, and UAB (a 6-5 Division 1-AA opponent, and two D-1A opponents with a combined record of 5-19). If you ever wanted a case study of why I only look at conference games, this would be a good one.
The rest really aren't too shocking. The teams that you thought rushed the quarterback really well (LSU, Florida, etc.) in fact did so, and the teams that you thought rushed the quarterback poorly in fact did so, too.
The only other mild surprise, to me, was Tennessee. The Vols are always very talented in the front seven, and I expected them to be a pretty solid pass rushing team, and they were far from it. For whatever reason, the Vols struggled to get after the quarterback, finishing tenth in the SEC in Adjusted Sack Rate, and had only about two Adjusted Sacks more than Alabama and Ole Miss. Yikes for the Vols. Not what I expected.
Finally, let's look at average yards lost per sack. Arkansas, LSU, and Florida (i.e. three of the top four teams in Adjusted Sack Rate) finished 9th, 10th, and 12th, respectively, in terms of average yards lost per sack. At bottom, teams knew they had a great pass rush, and planned shorter drops accordingly. And, of course, shorter drops equal less yardage lost when a sack does in fact occur. On the other end of the spectrum, the two teams that finished first and second (Alabama and Tennessee) in average yards lost per sack were about the two worst pass rushing teams in the conference. Opponents saw the lack of a pass rush, and planned deeper drops and longer routes accordingly. That, in turn, resulted in more yardage lost per sack, when a sack did in fact occur.
Later we'll put all of this together with 2006 pass defense and see what we find. It should be interesting stuff.
Sunday, July 29, 2007
Today, we'll move from a statistic that focuses mainly on the defensive side of the ball to a statistic that focuses mainly on the offensive side of the ball: the Scoreability Index. Whereas the Bendability Index determines how many yards it took an opponent to accumulate to score a point on a team, the Scoreability Index determines how many yards it takes an offense to score a point on their opponent. Just like was the case with the Bendability Index, the Scoreability Index is not solely a measure of offensive prowess, but instead a measure of overall team efficiency that takes into account a variety of factors, such as red zone offense, special teams performance, defensive performance, and turnovers.
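To make the yards-per-point idea concrete, here's a quick Python sketch with made-up season totals, not actual 2006 SEC figures:

```python
# Sketch of the yards-per-point idea behind the Scoreability Index.
# Lower is better: fewer yards of offense needed for each point scored.

def scoreability(total_yards, points_scored):
    """Yards of offense accumulated per point scored."""
    return total_yards / points_scored

# Two hypothetical offenses with identical yardage but different point totals
efficient = scoreability(3200, 280)    # ~11.4 yards per point
inefficient = scoreability(3200, 180)  # ~17.8 yards per point
print(f"{efficient:.1f} vs {inefficient:.1f} yards per point")
```

The point of the statistic is exactly that comparison: two teams can move the ball identically, yet one converts its yardage into far more points thanks to field position, red zone execution, turnovers, and special teams.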
The following is how the SEC finished up in terms of the Scoreability Index in 2006:
Now let's look at things closer, beginning with my beloved Crimson Tide.
Unfortunately, well, it's just not pretty for those clad in crimson. As you can see, we finished eleventh in the conference in the Scoreability Index, and combined with our eleventh-place finish in the Bendability Index, well, no one should be surprised that we went 6-7 and Mike Shula was fired. In terms of the Scoreability Index, the offense moved the ball relatively well. We racked up 2,691 yards of total offense -- granted, I understand that looking at total yardage alone doesn't tell you much, which is why we're going farther in this analysis -- and moreover we were very consistent in racking up yardage. We weren't boom and bust (400 yards one week, 150 the next); we consistently put up over 300 yards almost each and every week (the only two exceptions were Tennessee and MSU). But unfortunately, those yards didn't really translate into very many points.
At bottom, though we moved the ball relatively well, it didn't translate into points. Poor special teams (all across the board, in terms of punt returns, kick returns, punting, kicking, and punt and kick coverage), and below average defensive play meant that we generally didn't have great field position, so we had to move the ball long distances to score. Beyond that, once we did move the ball downfield, we bogged down in the red zone, so we never saw the full potential point value of all of the yards that we had accumulated. For example, moving the ball 74 yards isn't very meaningful if you can only get three points on the board when it is all said and done.
At the end of the day, it was just bad from top-to-bottom. There are no real bright spots to the matter.
Now just a few thoughts on the rest of the SEC...
Vanderbilt was the only team that finished behind Alabama, and in reality we should have been behind even them. Vanderbilt's yards-per-point ratio was skewed massively by the fact that in one game (Kentucky) they put up 621 yards of offense. For the other seven games, we were a good bit worse than even the Commodores. Bad stuff.
Arkansas just set the world on fire in this category, needing only 11.5 yards per point. You have to give the Hogs credit, they were good from top to bottom. They played great defense, capitalized on turnovers, had good special teams, and ran all over just about everyone when they actually got the ball on offense. What can you say? They were the epitome of how it is supposed to be done.
Auburn did not have a very good offense in 2006, but they did quite well in the Scoreability Index, finishing third in the conference. Again, they weren't very good on offense, mainly due to injuries to Brandon Cox and Kenny Irons. However, as an entire team, they still did the things needed to be successful. They got great special teams play, and great play from their defense. Though their offense wasn't particularly good, it was efficient and capitalized quite well on what their special teams and defense created for them.
And finally, to close, one of the keys with any statistic is how well it correlates to winning or losing football games. Some stats -- despite being talked about on ESPN 24/7 -- have essentially no correlation to winning or losing, but that is not true of this statistic. Much like the Bendability Index, the Scoreability Index has a very high correlation to actual wins. Just look at the teams that finished in the top five in the Scoreability Index versus those who finished in the bottom five. The teams in the top five of the Scoreability Index were a combined 54-13 (an 80.6 percent winning percentage). The teams in the bottom five were a combined 28-33 (a 45.9 percent winning percentage).
Small wonder Cold, Hard Football Facts -- the inventor of this statistic -- refers to it as one of the "Stats That Matter."
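For those who want to check the math, the winning percentages above fall out of a one-liner. A quick sketch (the records are the combined ones cited in this post):

```python
def win_pct(wins, losses):
    """Winning percentage, expressed as a percent of games played."""
    return 100 * wins / (wins + losses)

# Combined records of the Scoreability Index top five and bottom five.
top_five = win_pct(54, 13)      # roughly 80.6 percent
bottom_five = win_pct(28, 33)   # roughly 45.9 percent
```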
I recently ran across an article, however, that detailed the official SEC champion selection at SEC Media Days, and how they have historically performed.
According to this article in The Advocate, the media has correctly predicted the eventual SEC champion only twice in the fifteen years since the implementation of the divisional format in 1992. The media correctly predicted the eventual SEC champion in 1994 and 1995, both times when they selected the Florida Gators. In the other thirteen years, the media has been dead wrong.
So, in reality, the "official" SEC favorite hasn't actually won the SEC in the past eleven seasons, going 0-11 in the process.
Interestingly enough, teams not picked to win the SEC include five national champions (1992 Alabama, 1996 Florida, 1998 Tennessee, 2003 LSU, and 2006 Florida), and one team that went 13-0 despite being jilted at the ballot box (2004 Auburn).
I can't help but wonder if being the favorite makes teams prepare for you harder, and play harder against you, thus almost wholly ending your chances of winning the SEC. And perhaps that same phenomenon allows other teams to sneak up a bit and get the job done. Right or wrong, it seems something is amiss. I understand the media isn't the most football-savvy group in the world, but you could probably do better than 2-13 just by throwing darts at a dartboard.
But alas, back to the original point at hand: if LSU can pull it off this year, you'll have to give the Tigers and Les Miles all the credit in the world. They will truly have pulled off a very rare feat.
But what we can do is take the combined statistics (attempts, completions, yards, touchdowns, and interceptions) of every SEC pass defense and convert them into one simple, easy-to-understand formula: the quarterback passer rating.
Granted, the quarterback passer rating has some flaws, but generally it presents a good overall summary of performance.
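For reference, the NCAA passing-efficiency formula behind these rankings is just a weighted per-attempt average. A minimal sketch (the sample stat line is made up purely for illustration):

```python
def ncaa_passer_rating(attempts, completions, yards, touchdowns, interceptions):
    """Standard NCAA passing-efficiency formula: each component is
    weighted, summed, and divided by pass attempts."""
    return (8.4 * yards + 100 * completions
            + 330 * touchdowns - 200 * interceptions) / attempts

# A made-up stat line: 20-of-30 for 250 yards, 2 TD, 1 INT.
rating = ncaa_passer_rating(30, 20, 250, 2, 1)  # 152.0
```

To rate a pass defense this way, you simply plug in the combined totals of every quarterback that defense faced.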
Thus, here you go, the 2006 SEC pass defenses ranked in terms of opposing quarterbacks' cumulative passer rating:
And that sums things up quite nicely.
As you can see, the Crimson Tide ranked ninth in the SEC in terms of opposing quarterbacks' cumulative passer rating. At bottom, we looked good on total pass defense (4th in the conference) because we allowed so few passing yards. But the dirty little secret behind that was that we saw the fewest passes thrown against us of any team in the conference. In reality, we were quite a poor team in terms of pass defense, better than only lowly Mississippi State, Kentucky, and Vanderbilt.
Saturday, July 28, 2007
Just look at the past nine seasons.
In 2006, everyone and their brother thought the Auburn Tigers had it locked up. And then the season started, and they finished second in their own division. 2005 had no true favorite for the most part (though Tennessee likely came closest), and the team that did win it, the Georgia Bulldogs, was picked by effectively no one after losing media darlings David Greene and David Pollack to graduation the year before. Tennessee, likely the main favorite, went 5-6, and a purge of their coaching ranks ensued.
In 2004, many expected Georgia to rebound for the SEC title, and some others expected Florida to break through in Zook's third year. As it happened, Auburn came out of nowhere and went 13-0 en route to an SEC championship. In 2003, Auburn was everyone's favorite, but they crashed and burned to a five-loss season in which head coach Tommy Tuberville came this close to getting fired. No one expected LSU to do it, coming off of an 8-5 season in 2002, but they did just that, winning the SEC and a share of the national championship.
In 2002, Tennessee, with Casey Clausen returning, was dubbed by most as the eventual SEC Champion. The Vols imploded amongst team dissension, and the Georgia Bulldogs came out of nowhere to go 13-1, win the SEC, and go on to a victory in the Sugar Bowl over Florida State. In 2001, the Florida Gators were pegged to win the SEC, and possibly the national championship. But the Gators fell to Auburn and Tennessee, and didn't even win the SEC East. LSU, the actual 2001 SEC Champion, shocked everyone by pulling it out. No one thought they would win it even when they advanced to the SEC Title Game against Tennessee.
In 2000, our beloved Crimson Tide was the unanimous favorite, and uh, yeah, we all know how that one turned out. Florida ended up as the SEC champions, facing little resistance on their way to Atlanta or in Atlanta against Auburn. In 1999, everyone had either Tennessee, Georgia, or Florida as the SEC Champions. As it turned out, Alabama came out of nowhere and beat Florida twice, winning the SEC in a rout of the Gators in Atlanta.
In 1998, the Florida Gators were dubbed by most to win the SEC after the departure of Peyton Manning, but surprising Tennessee -- led by Tee Martin -- came out of nowhere and went undefeated, winning the SEC en route to a national championship.
Finally, you have to go all the way back to 1997 to find a favorite that went on to win the SEC. That year, Peyton Manning opted to forgo the NFL Draft and return for his senior season. The media picked up on it, and dubbed Tennessee the SEC Champion. Though the Vols were blown out by Florida early in the year in the Swamp, they made the SEC title game when Florida surprisingly lost to two inferior teams (LSU and Georgia), and narrowly edged out Auburn 30-29 in the SEC Championship Game.
In the nine seasons since the Vols' SEC Championship in 1997, the preseason SEC favorite has combined to go 0-9 in pursuit of the SEC title. More damning, however, is this fact: of those nine seasons, only once did anyone remotely close to being considered a favorite actually advance to the SEC title game (the 1999 Florida Gators), and even that came in a year when there was no clear-cut favorite.
To sum up -- and this is something you are probably only going to read here regarding media favorites -- the harsh truth of the matter is that none of the past nine favorites to win the SEC have actually won it, and only one time did a favorite even make it to the SEC Championship game.
The perilous road to a championship doesn't simply end there, however.
It is an almost equally perilous route for those attempting to win the SEC with a first-year starter at quarterback.
Just look at the past eight seasons. In the past eight years, every team that has won the SEC championship has done so with a returning starter at quarterback.
In 2006, Florida won the SEC and later the national championship under the leadership of Chris Leak, a four year starter playing as a senior. In 2005, Georgia won the SEC with D.J. Shockley, who had started several games in his career in Athens, and had generally split playing time with David Greene.
In 2004, Auburn won the SEC with Jason Campbell under center. Campbell was a fifth-year senior who had started nearly his entire career. 2003 saw the LSU Tigers win the SEC with Matt Mauck, who had started the previous year, and who was older than most college quarterbacks (Mauck was 24, due to a minor league baseball career).
In 2002, David Greene led Georgia to its first SEC championship in twenty years in his second full season as a starter. In 2001, Rohan Davey led LSU to the SEC championship as a senior who had started off-and-on his entire career at LSU (he generally split starts from 1999-2000 with Josh Booty). Finally, Alabama won the SEC in 1999 with Andrew Zow in his second full season as a starter.
You have to go all the way back to the 1998 season to find a team that won the SEC with a first-year starter. The Tennessee Volunteers did it that year with Tee Martin, and no one has done it since.
So what does all of it mean?
At bottom, it means that the SEC is an incredibly tough conference with an amazingly high level of parity. No matter how good one team is thought to be, the truth is that the conference is just so strong from top-to-bottom that the odds are very much against even the favorite winning the SEC. And, moreover, if you generally want to have any real chance of winning the SEC, you had better have a quarterback with at least a year of starting experience. Apparently this league is generally just entirely too tough to throw a first-year starter to the wolves and expect a champion to emerge.
As LSU embarks on its 2007 campaign, they will do so as the undisputed SEC favorite, and also with a first-year starter at quarterback: Matt Flynn. If the Tigers can win the SEC this season, they will not only have out-fought eleven opponents in a brutally tough conference, but they will have out-fought history as well.
Melvin Ray, one of the top wide receiver prospects in the country, has committed to Alabama. Ray is a 6'3", 202-pound receiver out of Tallahassee, Florida. He was one of the most widely sought-after recruits in the nation, with offers from the likes of Alabama, Florida State, Florida, LSU, Georgia, Clemson, and South Carolina. Now don't think this recruiting battle is over just yet. Ray is from Tallahassee, and admittedly he and his father have always been big FSU fans. And while he says he would not have committed now if he thought he would back out later, he also did say that he would take a visit (I assume an official visit in December or January) to Florida State. It's not over yet, but the larger point is that Saban is going out-of-state and freely reeling in such great prospects.
What can you say? Nick Saban is certainly earning that four million dollar paycheck on the recruiting trails. I've said it before and I'll say it again, this class will, as a worst-case scenario, be a top ten class, and has a great shot at being a top five class.
Many of the issues that we analyze around here can largely be addressed with a massive upgrade in the talent level of the entire roster, and Nick Saban seems to be doing just that.
The following is how the entire SEC ranked in Bendability Index:
All told, I wouldn't really say there are any major surprises. The teams generally stacked up about like you thought they would have, with a few exceptions here and there.
One point, though, that should be made regards Georgia. Though the Dawgs had one of the best defenses in the conference -- if not the nation -- in 2006, they were very low in terms of Bendability Index, ranking tenth in the conference by that measure. The problem was a struggling Georgia offense that often times committed devastating turnovers. Again, as said earlier, the Bendability Index is not solely a measure of defensive prowess, but of overall team efficiency. The 2006 Georgia Bulldogs proved that point quite nicely.
So just what is the Bendability Index?
Cold, Hard Football Facts -- insofar as I can tell -- is the inventor of this statistic, and they have this to say about it:
This is the first stat that chronicles the phenomenon of the "bend but don't break" defense and provides a measure of defensive efficiency. The Bendability Index is obtained by dividing a team's total yards allowed by total points allowed, yielding Yards Per Point allowed. A team that ranks high on the Bendability Index has the defense that opponents must work hardest to score upon.

In other words, you literally have to do more to score points against teams that rank high in the Bendability Index. You have to gain more yards, move the ball farther, pick up more first downs, etc.
But is the Bendability Index merely a measure of defensive prowess? No. Again quoting Cold, Hard Football Facts:
The Bendability Index is not purely a defensive yardstick. It is, instead, a great barometer of team success, of overall team strength and efficiency. It is a function of many team-wide factors, including general defensive strength, special teams proficiency, turnover differential, and Red Zone defense.

Though the Bendability Index measures defensive prowess, as noted earlier, it also indicates overall team strength. Poor play from the offense and special teams (namely offensive turnovers, and poor punting and kicking) also has a massive impact on the number of points allowed. So it's not just a measure of defensive performance, but of the efficiency of your team as a whole. Never forget, it's not just defenses that give up points. Teams, as a whole, give up points.
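In code, the formula is about as simple as stats get. A quick sketch, with season totals made up purely for illustration:

```python
def bendability_index(yards_allowed, points_allowed):
    """Yards Per Point allowed: total yards allowed divided by total
    points allowed. Higher means opponents worked harder per point."""
    return yards_allowed / points_allowed

# Two hypothetical defenses: same yardage allowed, different points.
stingy = bendability_index(3800, 200)  # 19.0 yards per point
leaky = bendability_index(3800, 300)   # about 12.7 yards per point
```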
Moreover, it's a highly relevant statistic. Double Extra Point -- a blog similar to this one dedicated to Nebraska football -- crunched the numbers and found that the Bendability Index correlated with winning significantly higher than total defense, pass efficiency defense, run defense, third down efficiency defense, and fourth down efficiency defense. The only thing that correlated higher with winning than the Bendability Index was scoring defense, and it did so only very slightly. In the big picture, the Bendability Index may even be a more important statistic than even scoring defense, as was noted by Cold, Hard Football Facts.
So how did my beloved Crimson Tide stack up in terms of Bendability Index in 2006?
Not well at all, unfortunately.
All told, when you run the numbers based on conference play, we finished eleventh in the conference in the Bendability Index, ahead of only Mississippi State. At the end of the day, opposing teams needed to generate only 13.39 yards per point scored. By comparison, Florida led the conference, with opposing teams generating over 19 yards per point scored. For another comparison, our 2005 team forced opposing teams to generate 23.75 yards per point scored (I don't have the full SEC data for 2005, but I'm sure that led the league by a mile).
Certainly that is a sign of poor play in 2006 by our defense. The loss of the stars of the 2005 defense, being brutally honest, hurt much more than anyone expected. I'm not sure even rival Alabama haters expected the drop-off that we ultimately saw.
But, again, it's not just the defense. It's a measure of overall team efficiency, or in our case in 2006, a lack thereof. The Alabama offense turned the ball over entirely too much, and in some cases those turnovers led directly to points. Take Quentin Culbertson's interception of John Parker Wilson returned for a touchdown in the Mississippi State game, for example: the Bulldogs put seven points on the board with their offense sipping Gatorade on the bench. But it was also indicative of other problems, particularly on special teams. No one needs to be told special teams were atrocious in 2006. We finished near the bottom in punting, kick-offs were generally short, and we generally allowed big returns. All told, it combined to give opposing teams short fields, if not points directly, and that made it much easier for them to score by having to move the ball a much shorter distance to get it into the end zone or between the uprights.
And, really, that's about all you can say. Unfortunately, it's just another statistic showing just how bad we were in 2006. Hopefully that will change dramatically in 2007 with the new coaching regime.
Either way, the Bendability Index is certainly something you should be keeping your eye on.
Thursday, July 26, 2007
At bottom, how should snaps be distributed between starters and back-ups?
Well, the simple answer is that the proper snap distribution depends on a variety of factors, and the distribution itself changes based on the unique circumstances of the situation in question.
To begin with, there are no hard and fast answers to the question. There's just no way you can really run the data and somehow conclude that starters should play x percent of snaps and that back-ups should play y percent of snaps. Something like that would require the situation to be very black and white, but in reality it's all one massive gray area.
Generally speaking, though, you would like to give back-ups a good bit of meaningful playing time. Doing so would not only build for the future, but it would also create a more experienced squad with more quality depth, and as we all know, quality depth is largely what wins football games. However, the harsh reality is that doing so is simply not a feasible option in most cases. Conference games (and all "big" games, for that matter) are almost always close, even for bad teams (look at our 2006 season), and you simply cannot afford not to have your best players on the field the overwhelming majority of the time. The games are just too close and the end result could completely change with just one play, so you must play your best players (i.e. starters) as much as you possibly can.
Beyond that, the snap distribution between the starters and the back-ups should also depend on the disparity of the talent levels between the two groups. If you have great starters and poor back-ups, it only makes sense to give the overwhelming majority of meaningful snaps to the starters; honestly, you may as well play them 'til they drop, because even in that scenario they are still likely your best players. On the other hand, if the talent level between the starters and back-ups is closer (say, Shaud Williams and Santonio Beard, as we saw in 2002) then you should alter your strategy to split the snaps more equally. Doing so limits the odds of one player getting injured, and the lesser physical pounding keeps them fresher, healthier, etc., and overall more productive in the long-run.
A sizable lead, too, can alter the ideal strategy. If you find yourself in an unusual number of blowout games (regardless of which end you are on), the ideal strategy would be to play the back-ups more. If you are blowing a team out (say, for example, like LSU blew out Kentucky 49-0 a year ago), you should bench the starters and play the back-ups. Doing so limits the risk your starters will suffer injuries, and moreover it will also help, albeit slightly, increase experience and quality depth. On the other hand, if you are getting annihilated, you should give the back-ups more time. Just as playing starters when you are ahead by a wide margin is a poor idea, so is playing them when you are behind by a wide margin. By that point, considering it is likely later in the game, a win is impossible, and all you will do by playing starters is stretch the game out and increase the risk of injury for your starters. You may as well give the back-ups some quality snaps to improve your overall team, and shield your starters from potential injuries.
The position of a particular player can also change the snap distribution. At one extreme, the starting quarterback is generally going to play every meaningful snap, as long as he remains remotely healthy. At other extremes (generally interior defensive linemen), players rotate almost continuously, with back-ups often getting as many snaps as starters.
At bottom, despite the extreme importance of finding the right snap distribution between starters and back-ups, it's a complicated question with no clear-cut answers. There are a ton of factors that can influence the ideal strategy, and figuring it out in any given game is a very complex matter. It's yet another gray area in which good coaching can come into play and make a massive difference in the overall performance of a team.
And finally, to close, never think that proper snap distribution is not important. No, we can't quantify it, and we can't put those numbers in an Excel spreadsheet and come up with an ideal strategy. But nevertheless, it's very important. Never think that just because you can't quantify something into a tangible product that it is not important. Proper snap distribution is just one of many things of that nature that are very important.
Wednesday, July 25, 2007
For one -- and first and foremost here -- we can compare an individual quarterback's performance versus the average performance of the eight SEC quarterbacks that a particular defense faced in a given year. At bottom, it gives a lot of context to exactly how well a quarterback performed. Raw numbers only tell us, generally speaking, whether or not a player performed well in a game or a season, but comparing their raw numbers to the overall league average tells us not only how well they performed, but how well they performed in comparison to how well other players at the same position performed against those same teams.
So how did John Parker Wilson do as a whole, compared to the overall league average?
All in all, Wilson's numbers were almost exactly identical to the league average, not really better and not really worse. Being very picky about it, Wilson's numbers were generally just a hair below the league average, but by a very small and essentially insignificant amount. All told, Wilson had a higher completion percentage (56.52 for Wilson, to the league average of 56.37), and he also threw for slightly more yards per attempt (7.07 for Wilson, to the league average of 7.00). However, Wilson did have a lower touchdown rate, and a higher interception rate. All told, the combined QB rating of conference opponents against the eight SEC teams we faced in 2006 was 124.3, and Wilson clocked in just a hair below that at 122.4.
In terms of over / under performing the average, Wilson's best day came against LSU in Tiger Stadium. Though LSU held quarterbacks as a whole to under 150 passing yards per game and a completion percentage of lower than 50 percent, Wilson threw for 291 yards against LSU, and completed 62.8 percent of his passes. Wilson's worst day came, oddly enough, the week before against lowly Mississippi State in Bryant-Denny Stadium. The Bulldogs struggled to defend the pass all year long, as opponents, as a whole, completed almost 60 percent of their passes for over 220 yards per game, with a 3:2 touchdown-to-interception ratio. As with most other things that day for the Crimson Tide, it didn't quite go as planned. Wilson completed less than 50 percent of his passes for under 200 yards total, and threw two interceptions against no touchdowns. Given the succession of games, I suppose an outhouse to penthouse reference would be fitting here.
All in all, though, the results are pretty good. Wilson was a first-year starter, suffering from poor coaching and likely poor play-calling. Moreover, the offensive line was poor, and the running game was completely ineffective; arguably our worst since 1993. On top of all of that, Wilson played nearly all year with a badly sprained right ankle (i.e. his plant foot). About the only thing Wilson had going for him in 2006 was a good receiving corps, and while it was good (Hall was great), after Brown was injured in the Ole Miss game we never had another solid receiving threat. All told, the wide receiver corps was pretty good (due to Hall's performance), but far from extraordinary. Still, despite all of that going against him, Wilson put up average production numbers in 2006. Not a bad showing at all when you consider the circumstances.
It's just another good sign about the ability and potential of our starting quarterback.
Tuesday, July 24, 2007
SEC Media Days is currently underway, and a week after Nick Saban goes to Hoover (along with Antoine Caldwell and Simeon Castille), Fall practice will begin. The freshmen will arrive, heads will be shaved, jersey numbers will be assigned, the pads will shortly start popping, we'll ooh and ah at players that are -- whoa, big shock -- not fat and out of shape, and once again for the second time in 2007 the practice facility will see more pizzas made than the Domino's on 15th Street. Spectators will crowd around the green-tarp-covered gates, the truly dedicated will get a glimpse via binoculars from the 4th floor of the south end of Mary Burke East (it's a nice view, trust me), tens of thousands will watch over-and-over two minute clips of the Tide running elementary drills on TideSports, and the position battles will quickly take shape.
I say to hell with the flowers blooming and the days getting longer, nothing is better than football right around the corner.
So what do we plan to do around here on Outside The Sidelines?
We're going to have some in-depth coverage, to say the least. I don't know exactly what we're going to do, but we'll try to get quite a bit done. I must say that I do enjoy this blog. It combines two of my favorite things: empirical analysis and Alabama football. So, I'll try to do as much as time permits.
For certain I am going to do game-charting, so we can have some very in-depth data to analyze. Unfortunately, at the moment I only have the play-by-play data from RollTide.com, ESPN.com, and other sites, and while decent, it's nowhere near as in-depth as I would like for it to be. There are simply so many questions I have that I cannot answer with the data at hand. However, with me doing game-charting for the 2007 season, I'll have that data at my fingertips. At the end of the day, it's just going to improve the quality and depth of the content here by leaps and bounds.
I do plan on doing a very in-depth game-by-game recap, and I have a few more ideas.
If you have any suggestions, drop me a line in the comment box.
According to various sites, Michael Ricks has qualified, and will report for Fall practice on August 3rd.
And hey, even us nerds have to celebrate once in a while.
At bottom, this is pretty big news. If you recall from my earlier prognostications on starters for 2007, you know about our safety situation. I said that Ricks, if he qualified, could take one of the starting jobs. Rivals.com -- and I'll be doing an in-depth analysis on recruiting services in the next few months, just a little FYI -- rated him as one of the top five JUCO prospects in the country, and he could make a huge impact.
In a worst-case scenario, Ricks will see a good bit of playing time in 2007 if he can stay healthy ("good bit" means somewhere in the range of 300 snaps), and will more than likely get a starting job.
This is big news for us. Ricks' presence could, and should, have a very tangible impact on the overall quality of our defensive production in 2007.
LSU and Arkansas were two of the top pass defenses in the SEC in 2006. Interestingly enough, though, on the surface you might not expect that they were all that good. Combined, they gave up twice as many touchdowns as they had interceptions, and the Bayou Bengals and the Hogs finished tenth and eleventh in interception rate, respectively. But make no mistake about it, these two pass defenses were damn good. LSU and Arkansas finished first and second in completion percentage, and third and fourth in yards per completion, respectively.
LSU and Arkansas bring an opportunity to make a great point about effective pass defense. In reality, in terms of the coverage pass defense gets, all that attracts any real attention is touchdown passes and interceptions. That's what Stu Scott blows his load over on SportsCenter, and that is what is replayed a million times on YouTube. But that is far from what effective pass defense is about.

As LSU and Arkansas made a great example of in 2006, effective pass defense is mostly about forcing a large number of incomplete passes, and limiting the yardage allowed when a pass is completed. Oh sure, limiting touchdowns and maximizing interceptions are things you should certainly strive for, but in reality touchdowns and interceptions are a relative rarity, with a touchdown or an interception occurring on only about 8.5% of total pass attempts (8.47% of the time in 2006). Even the SEC leader in interception rate (South Carolina) snagged interceptions on a mere 5.53% of all passes thrown. At bottom, the truth of the matter is that the best pass defense is a pass falling harmlessly to the ground, or, if a pass is caught, tackling the receiver immediately and limiting him to a relatively short gain. If you can consistently do that, almost regardless of what happens with touchdowns and interceptions, you are going to have a good pass defense. Again, LSU and Arkansas proved that point quite nicely with their 2006 performances.

On the other hand, if you can't consistently force incomplete passes -- however you go about doing that -- and if you can't limit yardage in the event of a completion, you are in major trouble. Even if you snag more interceptions than any other team in the conference, interceptions are such a rarity in terms of overall passes that if you can't force incompletions and limit yardage on the other 95% of throws, you are not going to have a very good pass defense.
South Carolina exemplifies the last point. The Gamecocks in 2006 were great at intercepting the football, as they led the SEC in interception rate. They snagged eleven interceptions on 199 passing attempts, giving them a league-leading 5.53% interception rate. However, they finished seventh in the conference in completion percentage (58.29% of passes against them were completed), and finished tenth in the conference in yards allowed per completion, giving up roughly 13.5 yards per catch. And, of course, the South Carolina pass defense was pretty mediocre, all things considered. Again, it doesn't matter if you are great at intercepting the football, interceptions are such a rare occurrence that you will not be able to build a good pass defense off of just intercepting the football. You must be able to force incompletions and limit the amount of yardage gained when the ball is caught. South Carolina was not able to do that particularly well in 2006, despite their knack for interceptions, and as a result they did not field a particularly good pass defense.
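All of the rates in the last two paragraphs come straight from the raw attempt totals. A sketch using South Carolina's 2006 interception numbers from above (the completion and yardage totals here are back-computed from the percentages cited, so treat them as approximations):

```python
def pass_defense_rates(attempts, completions, yards, interceptions):
    """The per-attempt and per-completion rates a pass defense lives on."""
    return {
        "completion_pct": 100 * completions / attempts,
        "int_rate": 100 * interceptions / attempts,
        "yards_per_completion": yards / completions,
    }

# South Carolina, 2006 SEC play: 11 interceptions on 199 attempts.
# Completions (116) and yards (1566) are approximations consistent
# with the 58.29 percent and 13.5 yards-per-catch figures above.
sc = pass_defense_rates(199, 116, 1566, 11)
# sc["int_rate"] comes out to about 5.53 percent
```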
Georgia, too, brings about an opportunity to showcase another point. The Dawgs had a very good pass defense in 2006, no two ways about it. Though they finished sixth in the conference in completion percentage, they were second in yards per completion, second in interception rate, and fourth in touchdown rate. At bottom, this was a good pass defense. But what is the point I'm getting at? The point is that the Dawgs' very good pass defense was due in large part to a great pass rush from the defensive ends. All told, Quentin Moses and Charles Johnson combined for 15.5 sacks in 2006, and are now playing in the NFL. The point is that good pass defense can come from a good pass rush from the defensive line, good pass coverage from the secondary, or both. You oftentimes see teams with relatively poor secondaries do quite well against the pass because they rush the passer so well with their front four, and likewise, you oftentimes see relatively poor pass-rushing teams do quite well because the secondary covers so well. Georgia proves that point quite nicely. Their secondary -- though stocked with solid players -- perhaps wasn't as good as the numbers would suggest, but nevertheless Georgia had a great pass defense because they got such a good, consistent pass rush from their defensive ends.
And that's going to be about it for the lessons, but moving on...
I said earlier that LSU and Arkansas were two of the best pass defenses in the conference, but I didn't say they were the best. Reason being: the Florida Gators. Though the Gators did allow opponents to complete a lot of passes (9th in the conference), they were exceptional in every other category. Granted, they did give up quite a few completions, but they generally didn't give up very much yardage. They finished first in the conference in yards per completion, and did so by a very wide margin over second place Georgia. Moreover, despite the fact that teams threw more passes against the Gators than any other SEC team (290 attempts), they allowed the fewest touchdown passes in the SEC, thus leading the SEC in touchdown rate by an incredibly massive margin. Beyond that, the Gators had more than twice as many interceptions as touchdowns, thus leading the conference in interception-to-touchdown ratio, again by a massive margin. What can you say? These guys really had it together in 2006.
The Auburn pass defense was generally regarded as good, but it wasn't anything overly special, and it was a bit intriguing. They didn't give up very many completions (fourth in the conference in completion percentage), but when they did give up completions, they generally went for big yardage (eighth in the conference in yards per reception). Moreover, they didn't intercept the ball particularly well (ninth in the conference in interception rate). However, they did finish third in the conference in touchdown rate, which makes me wonder a bit. It wasn't a particularly great pass defense, so why did they finish so high in that category? A few explanations are possible. One, they may not have given up very many long passes; two, they may have tightened up considerably in the red zone, where teams could have opted more towards the run; or three, they may have just played that much better in the red zone with a smaller space to defend. In any event, it'd be nice to break things down even further to see exactly what was going on there.
Tennessee, as a whole, was pretty middle of the road. They did quite well in a couple of categories, finishing third in completion percentage and fourth in interception rate. But they also didn't do too well in a couple of categories, finishing seventh in yards per completion and ninth in touchdown rate.
The Ole Miss pass defense was probably a bit better than most gave it credit for being, at least in the eight conference games. They finished fifth in completion percentage, and fifth in yards per completion. They were the definition of average in touchdown rate, finishing sixth in the conference. The problem with the Rebels was interceptions. For whatever reason, Johnny Reb couldn't get an interception if his life depended on it. They racked up a mere three interceptions, and finished dead last in terms of interception rate, with their interception rate a puny 1.72%. Again, though, this was a better pass defense than most gave it credit for. The Rebs only went 2-6 in conference play, but they scared the living hell out of Auburn, Alabama, LSU, and Georgia, and came very close to knocking each member of that group off. Obviously, it wasn't the atrociously anemic offense that kept them in those games, and run defense alone can't do it. The Rebs pass defense, while not particularly good, was pretty decent.
And then we get to the really ugly pass defenses...
Kentucky finished tenth in the league in completion percentage, and not only did they give up completions often, those completions went for huge yardage. These weren't dink and dunk throws that failed to rack up very much yardage. All told, UK finished dead last in the SEC in yards per completion, giving up 15.66 yards per catch. They were the only SEC team to give up more than 14 yards per reception. Though they intercepted the ball quite well (5th in the SEC in interception rate), they struggled in terms of touchdown rate, finishing 8th. At bottom, this pass defense was just terrible. All year long, teams consistently threw the ball at will with little trouble. How in the world Mike Archer, the defensive coordinator, was hired by NC State, I'll never know.
Mississippi State was equally terrible. They finished eleventh in completion percentage, and eleventh in yards per reception. They did intercept the ball relatively well, sixth in the conference in interception rate, but did give up a lot of touchdown passes, as they finished eleventh in the conference in touchdown rate. At bottom, it was a terrible pass defense. Teams completed a high percentage of passes, and those completions went for a lot of yards. All year long, teams easily moved the ball through the air on the Bulldogs. About the only good game their pass defense had was, unfortunately, against Alabama. What can you say? They went 3-9 for a reason.
And last, and certainly worst, we have the Vanderbilt Commodores. The enlightened 'Dores might win in the game of life, but they didn't win on the field in 2006, and their horrendous pass defense was a large reason why. The 'Dores finished dead last in completion percentage, as teams completed almost 65 per cent of their passes against them. Vanderbilt was the only team to have a completion percentage above 60 per cent. They finished ninth in yards per completion, seventh in interception rate, and dead last yet again in touchdown rate. What can you say? They were just, well, Vanderbilt.
Monday, July 23, 2007
Overall pass defense, however, ranks teams based on the total number of passing yards allowed. And, as Football Outsiders pointed out, "Ranking pass defenses on total yardage allowed is phenomenally stupid. Poor teams will give up fewer passing yards because opponents will stop passing and run out the clock instead." Of course, the flip side of that is also true for good teams. And, without going too in-depth here, there are other pitfalls with ranking teams based on pure yardage allowed alone.
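To see the pitfall concretely, consider a toy Python example with two entirely made-up teams: the team that faces half as many passes allows fewer total yards even though it is much worse on a per-attempt basis:

```python
# Hypothetical teams: Team A gets thrown on constantly but defends well;
# Team B faces few passes (opponents run out the clock) but defends poorly.
teams = {
    "Team A": {"attempts": 300, "yards": 1800},  # 6.0 yards per attempt
    "Team B": {"attempts": 150, "yards": 1200},  # 8.0 yards per attempt
}

# Rank best-to-worst by total yards allowed, then by yards per attempt.
by_total = sorted(teams, key=lambda t: teams[t]["yards"])
by_rate = sorted(teams, key=lambda t: teams[t]["yards"] / teams[t]["attempts"])

print(by_total)  # ['Team B', 'Team A'] -- raw yardage flatters Team B
print(by_rate)   # ['Team A', 'Team B'] -- per-attempt shows Team A is better
```

That's the whole argument in two lines of sorting: rate stats normalize for how often opponents bothered to throw, and raw totals do not.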
So, I decided to do an in-depth analysis of the 2006 Alabama pass defense. As usual, I looked at every SEC team in conference play (eight games), and saw how they all stacked up.
Long story short, for the Crimson Tide, it's not too pretty.
The best part of our 2006 pass defense was that we intercepted a relatively high number of passes. We snagged 10 interceptions in conference play in 2006, which tied us for 3rd in the conference in interceptions. Moreover, those interceptions weren't just a fluke due to opposing teams throwing a ton of passes against us (thus giving our defensive backs more chances at getting interceptions). We finished 3rd in the conference in interception rate (total interceptions divided by total pass attempts), as 5.21% of passes thrown against our defense resulted in interceptions.
But, honestly, we shouldn't read too much into those interceptions. Yes, we did intercept ten passes in 2006 conference play, but those interceptions were sporadic at best. Six of those interceptions came in the first two conference games, when we faced a redshirt freshman quarterback in his first SEC game (Chris Nickson, Vanderbilt) and a true freshman (Mitch Mustain, Arkansas). After that, though, things fell off dramatically. Over the course of the final six conference games, we racked up a mere four interceptions, and three of those came in the first half of the Tennessee game. Combined, in the Florida, Ole Miss, Mississippi State, LSU, and Auburn games and the second half of the Tennessee game, we intercepted one pass. In those 150+ passing attempts, the only interception was Jeffrey Dukes' interception returned for a touchdown against Mississippi State.
At bottom, we really didn't force turnovers that well, as a whole, in 2006.
After that, though, it gets worse.
In terms of completion percentage, Alabama finished eighth in the conference in 2006, as opposing quarterbacks completed almost 59% of their passes against the Crimson Tide (specifically, 58.85% of passes). That's a very disappointing statistic, to say the very least. The entire Kines defensive scheme -- in terms of defending the pass -- was built upon abandoning an aggressive pass rush in favor of dropping multiple defenders into zone coverage, thus forcing opposing quarterbacks to throw into heavy zone coverage. The underlying idea to it was that opponents would not be able to complete a high percentage of passes, and those that were completed would generally go for a relatively short gain. People often made a big deal about our lack of sacks (last in the conference in total sacks and adjusted sack rate), but that really would not have been a problem at all if we had successfully defended the pass with the heavy zone coverages.
Unfortunately, at least in 2006, that didn't work as planned. As noted earlier, teams did complete a high percentage of passes (almost 59%, eighth in the conference), and, what was worse, when those passes were completed, they went for relatively large chunks of yardage. All told, we gave up on average 12.44 yards per completion, which put us sixth in the conference in yardage allowed per completion.
Moreover, we gave up touchdown passes at an alarmingly high rate. All told, we allowed a touchdown pass on 5.73% of the passes attempted against us, and that put us tenth in the conference -- ahead of only the pass-hapless Vanderbilt and Mississippi State -- in terms of touchdown rate (passing touchdowns divided by total passing attempts).
What can I say? Despite assertions to the contrary that the 2006 pass defense was quite good, that argument simply doesn't hold water once you analyze it. The truth of the matter was that the 2006 pass defense was, at absolute best, mediocre. The Kines defense was based on not rushing the passer, and dropping multiple defenders (usually between six and eight) into heavy zone coverages. As noted earlier, the idea was that opposing quarterbacks would have trouble completing passes against the heavy zone coverages, and when they did complete passes, those passes would generally go for small gains. But, again, it just didn't work that way. Opposing quarterbacks did complete a high percentage of their passes, and those passes, when completed, generally went for relatively large gains. Moreover, passing touchdowns came at an alarmingly high rate, and though from a raw numbers perspective we intercepted the ball quite a bit, the truth of the matter is that the interceptions were very sporadic at best, and after the Arkansas game we really just could not get it done on that front.
Despite what has been often repeated, the 2006 pass defense just wasn't that good.
Since I broke down the entire conference, I'll make a post tomorrow analyzing the rest of the SEC in 2006 pass defense.
Sunday, July 22, 2007
I noticed in several comments regarding my most recent article on red zone production that some people have concluded from my analysis that play-calling was not a problem. At bottom, that's wrong.
We were "balanced," by most definitions of the term, offensively inside the red zone in 2006. However, never think that being statistically balanced in terms of a run-pass ratio means that play-calling is therefore, by extension, good.
You have to realize that effective play-calling and offensive balance are two very different, and often unrelated, concepts. Effective play-calling involves calling the correct plays based on what the opposing defense does, and whatever objective you are trying to accomplish at the time (whether it be scoring points, controlling the ball, or running out the clock). You can be balanced offensively and still have horrible play-calling.
For us in 2006, that means that when people say play-calling wasn't bad based on my previous red zone analysis, they are wrong and misinterpreting what the analysis really says.
What I did find in the analysis was that the "conservative" argument for our red zone struggles was not true. The argument goes something like this: the offense did fine until we got into the red zone, and for whatever reason, Shula and co. ran the ball entirely too much, opposing defenses knew what was coming, and they easily stopped it, hence our red zone troubles. And that is demonstrably false. The truth is, we were very balanced in the red zone; it's just that neither the run nor the pass worked particularly well. And actually, believe it or not, we had more success running the football in the red zone than we did throwing the football. Running the football netted, on average, 1.98 yards per carry, while passing the football netted, on average, only 1.56 yards per play. Moreover, passing plays much more often than running plays resulted in very bad outcomes, such as turnovers and large amounts of lost yardage.
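For what it's worth, per-play splits like those come from simple averaging over a play log. Here is a sketch with wholly hypothetical plays (only the method matters here, not the numbers):

```python
# Each entry: (play_type, yards_gained). The plays are entirely hypothetical.
plays = [("run", 3), ("run", 0), ("pass", 7), ("pass", -8),
         ("run", 4), ("pass", 5), ("run", 1), ("pass", 2)]

def yards_per_play(plays, play_type):
    """Average yards gained on plays of the given type."""
    gains = [yards for kind, yards in plays if kind == play_type]
    return sum(gains) / len(gains)

print(yards_per_play(plays, "run"))   # 2.0
print(yards_per_play(plays, "pass"))  # 1.5
```

Note that a single busted pass play (the -8 above) drags the passing average down hard, which is exactly the "very bad outcomes" effect described in the paragraph.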
But does that balance mean play-calling was good? Again, no. Play-calling, despite being balanced, may well have been terrible in 2006. In fact, given the overall incompetence of the Shula regime, I very much expect that it was terrible, but we can't prove that one way or the other with the data I have compiled at the moment.
Though we were balanced, I imagine that play-calling was indeed a major contributor to the lack of production from our 2006 red zone offense.
Thursday, July 19, 2007
At bottom, the guys at Pro Football Reference wanted to know if capping blow out wins in the NFL would make the Pythagorean projection more accurate.
And what did they find?
They found that capping blowout wins in the NFL does not make the formula any more accurate in terms of determining how good teams truly are. However, they did note that capping blowout wins in college football would make that formula more accurate in terms of determining how good teams truly are.
If you recall my previous posts on Pythagorean wins, I have been doing the same thing for months in regard to college football. I've been making the argument that games against the Sisters of the Poor (Western Carolina, etc.) should not be used in Pythagorean projections because the talent disparities between the teams are so great, and one team is effectively guaranteed a win. The bigger opponent can basically name the score against the Sister of the Poor in question, and that massive blowout win inflates that team's Pythagorean projection to make it look like they should have won more games than they did.
At bottom, when you do it for the entire SEC, it means that basically every SEC team underachieves in a given year, i.e. they didn't win as many games as the formula says they should have.
The same analysis, as the guys at Pro Football Reference noted, does not work for the NFL because the teams are so similar. Even when the best NFL teams play the worst, for example the Raiders, the teams are still relatively closely matched. You never run into anywhere near the disparities that you do in college football when, for example, USC plays Idaho (as they do in the 2007 season opener). To get that same disparity in the NFL, you would have to have a team like the New England Patriots play someone, for example, from NFL Europa, and of course that never happens.
So what does it mean for Pythagorean wins for college teams?
It means, as I've been saying for months, you should not factor in the cupcake games, and focus much more on conference games.
Perhaps I should tweak my Pythagorean formula slightly more, say including all "big" games (conference opponents and legitimate out-of-conference opponents, including possibly bowl games), or perhaps I should count the games against the Sisters of the Poor, but somehow weight down those games to where they are not very important. Doing so, of course, has pitfalls of its own (namely, you are comparing apples and oranges because then teams are playing very different schedules, unlike when you compare only conference play when schedules are very similar, so strength of schedule would become a major factor that would have to be evaluated), but perhaps it would make the formula more accurate.
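For reference, here is one way to sketch a Pythagorean projection with an optional cap on each game's scoring margin. The 2.37 exponent and the scores below are assumptions for illustration, not the exact formula or data used in my earlier posts:

```python
def pythagorean_wins(games, exponent=2.37, cap=None):
    """Project wins from a list of (points_for, points_against) games.

    If cap is given, each game's scoring margin is clamped to +/- cap
    before the points are totaled, muting cupcake blowouts.
    """
    pf = pa = 0
    for scored, allowed in games:
        if cap is not None and abs(scored - allowed) > cap:
            if scored > allowed:
                scored = allowed + cap
            else:
                allowed = scored + cap
        pf += scored
        pa += allowed
    win_pct = pf ** exponent / (pf ** exponent + pa ** exponent)
    return win_pct * len(games)

# Hypothetical four-game slate with one Sisters-of-the-Poor blowout
games = [(59, 0), (24, 21), (17, 20), (28, 24)]
print(round(pythagorean_wins(games), 2))          # uncapped projection
print(round(pythagorean_wins(games, cap=21), 2))  # the 59-0 clamped to 21-0
```

Capping pulls the projection down because the lopsided win no longer dominates the points-for total, which is exactly why uncapped projections make every SEC team look like an underachiever.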
Maybe that should be a research topic for another day.
Wednesday, July 18, 2007
But, of course, all turnovers are not created equal.
Despite early football research (The Hidden Game of Football, by Pete Palmer and Bob Carroll) indicating that a turnover had a fixed point value regardless of where it occurred on the field, later research showed that the value of a turnover increases as you get closer to either end zone, and decreases as you get closer to mid-field. It doesn't matter whether you turn it over near your own goal line or near your opponent's goal line, because it costs you points either way; field position in that sense only determines whether the cost comes from points you fail to score or from points your opponent goes on to score. All told, the value of a turnover, based on research from Football Outsiders, varies from 3.8 points to 4.25 points, depending on the field position where the turnover occurs.
A variety of other factors can also greatly affect the value of a turnover, including the time remaining in the game when it occurred, who was leading and by how much, and how far the turnover was returned by the defense, just to name a few.
So, exactly how costly were the seven Wilson interceptions and four fumbles in conference play in 2006? Let's go through each interception and fumble, one by one, to get a better idea.
For the interceptions...
- Wilson was intercepted once in the Vanderbilt game. It occurred in the second quarter, on a first and ten, and we had the ball at our own 34. Vanderbilt, after a zero yard return, took over at their own 47 yard line.
- In a 0-0 game in the first quarter against Florida, we were in Florida territory at the 34, facing a 3rd and 8. Wilson's pass was intercepted by Ryan Smith, who returned it 29 yards to the Alabama 47.
- Trailing Florida 14-13 with 8:30 to go in the fourth quarter, Wilson was intercepted by Ryan Smith on 2nd and 10. The interception gave Florida the ball at the Alabama 34, and set up a touchdown that put the Gators up 21-13.
- Trailing Florida 21-13 with 4:30 to go, we had the ball, 2nd and 10, at the Florida 46, driving in an attempt to score a game-tying touchdown and two-point conversion. Wilson was intercepted after his pass sailed high over the head of Keith Brown into the arms of Reggie Nelson, who raced 70 yards for a Florida touchdown, putting the Gators up 28-13, and effectively ending the game.
- Trailing Mississippi State 17-10 with three minutes to go in the second quarter, we had the ball 2nd and 8 at the Alabama 47, and Wilson was intercepted by Quentin Culbertson, who returned the interception for a touchdown, giving MSU a 24-10 lead.
- Trailing LSU 28-14 with 3:15 to go in the third quarter, we had the ball 3rd and 12 from LSU 23, desperately needing a conversion to bring the game within one possession. Wilson fired down the field and was intercepted by Chevis Jackson.
- Trailing Auburn 22-15 with less than a minute to go, we had the ball 2nd and 7 at our own 44. Wilson looked to his right and threw the ball, where it was intercepted at the Auburn 38 by David Irons, which ended the Iron Bowl and ultimately the Mike Shula era.
And for the fumbles...

- Tied 10-10 with Arkansas in the third quarter, we had the ball 3rd and 13 at the Arkansas 48. Arkansas made the perfect blitz / coverage call, and Wilson was doomed from the start. Instead of taking the sack, Wilson tried to stay in the pocket and make a play, where he was hit. The ensuing fumble was scooped up by Randy Kelly and returned 39 yards for an Arkansas touchdown; Arkansas 17, Alabama 10.
- Trailing LSU 21-14 in the closing seconds of the first half, we had the ball 1st and goal at the LSU 8. We were looking for a touchdown to go into halftime tied, or at least a field goal. On first down, however, Wilson was sacked by Daniel Francis and fumbled. LSU recovered, and ran out the clock on the first half.
- Leading Auburn 3-0 with the ball 3rd and 6 at our own 25, Wilson was sacked by Quentin Groves. A fumble ensued, which Auburn recovered, and that led directly to an Auburn touchdown, 7-3 Auburn lead.
- Trailing Auburn 7-3, two Alabama offensive plays after the previous sack and fumble, Wilson was sacked on a 2nd and 16 from the Alabama 14 by Quentin Groves. Another fumble ensued, Auburn again recovered, and that led directly to another Auburn touchdown, 14-3 Auburn lead.
So what do we take away from all of this? One, Wilson did turn the ball over entirely too much in 2006. All told, he had eleven turnovers in eight conference games, and that is simply entirely too many. By comparison, in 2005, Brodie Croyle had only two turnovers in eight conference games, and both of those came in the Mississippi State game. Granted, Croyle's turnovers in 2005 are very much at the lower end of the turnover spectrum, but nevertheless they do highlight the value that can be found in limiting turnovers. The 2005 offense was terrible, no two ways about it, but Croyle did protect the football and thus didn't give away points in that sense. Wilson couldn't replicate Croyle in 2006, and that was a large reason 10-2 crashed into 6-7.
Two, Wilson's turnovers, when they did occur, were extremely costly to the team. Ten of Wilson's eleven turnovers came when we were either tied in a game or trailing in a game, and that is nothing short of brutal. Turnovers are killers regardless of when they occur, but if you turn the ball over when you are tied or behind, it can really just end your chances of winning, which it did with us on many occasions in 2006. Moreover, the majority of Wilson's turnovers occurred outside of the area between the thirties, and three led directly to touchdowns for the opposing defense. And, Wilson's interceptions were generally returned for a lot of yards, with opponents averaging around 20 yards per return. There are no two ways about it: though turnovers are generally costly, the majority of turnovers committed by Wilson were extraordinarily costly to the Tide.
Three, it is up to Wilson to improve. Many blamed the fumbles on the offensive line -- particularly in the Iron Bowl in the case of Chris Capps -- but at the end of the day, the ultimate responsibility falls upon Wilson to protect the football. It does not matter if all five offensive linemen decide to scratch their privates instead of block the defensive linemen, it is still up to Wilson to protect the football. He should see the situation, and secure the football. If he has to take a sack, then so be it. Not doing so is a failure upon Wilson's part, not the offensive line, and it is a particularly damning failure when the oncoming defender -- as was the case with Quentin Groves in the Iron Bowl -- is well within Wilson's line of sight.
I do not believe that we should be overly critical of Wilson, mind you. He had a solid season in 2006, all things considered. He was a sophomore and a first-year starter, and it is essentially a given that quarterbacks of that nature will commit several costly mistakes early in their careers, and Wilson was no different. The point of it all is that -- and the analysis says essentially this -- Wilson hurt his team quite badly in 2006 with turnovers, and that he must improve on that in the 2007 season if we are to have the success that we all want, whether it be as an offense or as a team.
Beyond completions, attempts, and touchdowns, perhaps the thing most should keep their eye on this season with regard to John Parker Wilson is turnovers.
But is there any underlying validity to all of those "high expectations" statements, or is it just a tired, baseless, and overused assertion from those who want to criticize Alabama?
Well, there is only one way to find out. Let's look at past Alabama coaches since Bryant retired in 1982.
Ray Perkins, of course, succeeded Bryant in 1983. Despite leading Alabama to its first losing season in almost 30 years, Perkins returned and had good seasons in 1985 and 1986. The Alabama fan base was generally pleased with him, and he was not on the hot seat. Perkins left Alabama of his own volition, when he was wooed back to the NFL by Alabama business school graduate and financier Hugh Culverhouse, who was the owner of the Tampa Bay Buccaneers. Had Perkins not left on his own, he would have easily held on to the Alabama job for the foreseeable future.
Bill Curry was chosen to replace Perkins, and despite a less-than-solid track record at Georgia Tech, Curry did well at Alabama. He went 26-10 in three seasons, won the SEC in 1989, and was 6-0 against Tennessee and Penn State. Some people -- either out of ignorance or in a lame attempt to re-write history -- like to claim that we ran Curry off after his third consecutive loss to Auburn in 1989, and that is factually wrong. Granted, no one was happy with Curry's 0-3 record against in-state rival Auburn, nor the fact that they ended our undefeated season in 1989 -- and seriously, what half-way decent fan base would be happy with that? -- but the fact remains that Curry was not run off by Alabama. He was not fired in the month following the loss to Auburn, and actually led the Tide in a close loss to eventual national champion Miami in the Sugar Bowl. Curry was slated to return as the Tide coach in 1990, but he left for Kentucky over a contract dispute. At the end of the 1989 season, we offered Curry a new contract that contained clauses that he was not happy with (specifically, no raise and removal of his power to hire and fire assistants), and thus he bolted to Kentucky. Bill Curry himself even admitted as much in the book "The Uncivil War: Alabama v. Auburn 1981-1994." You can read it for yourself by purchasing the book and reading chapter eight. At bottom, Curry was not "run off" or fired from Alabama because he didn't meet our supposedly "high expectations." In reality, Curry left of his own volition, when he could have come back to Alabama in 1990, because of a contract dispute with the university.
Gene Stallings was hired after Curry left for Kentucky, and he was wildly successful. In 1996, though, Stallings chose to retire. Some claim that Stallings was run off, and his "retirement" was just putting a nice face on the situation. That, too, much like the Curry claims, is incorrect. Stallings left of his own accord due to disputes with university president Andrew Sorensen and athletic director Bob Bockrath. Stallings felt that Sorensen and Bockrath were micro-managing the football program -- and of course the passage of time proved Stallings right in the years that followed -- and Stallings felt that with excessive administration meddling in his football program, it would simply make for an untenable situation. Stallings was very much in the Bryant mold -- Bryant contractually demanded that he would report to a laissez faire university president and no one else -- and allowing active and excessive meddling in his program by administrators was simply unconscionable to a man like Stallings. As a result, he left of his own accord. Certainly, there were some disgruntled rumblings about Stallings -- watching his much-superior teams squeak out wins by a handful of points due to poor offenses was very frustrating, as you would expect -- but the overwhelming majority of the Alabama fan base knew the magnitude of what all Stallings accomplished in his seven years (a national championship and 70 wins), and the fan base was greatly saddened when he announced after the 1996 Iron Bowl that he would be retiring following the bowl game. "High expectations" didn't get Stallings; far from it, in fact.
DuBose came next, and he got more leniency than you could imagine. After a disastrous 4-7 season in 1997, he had a mediocre year in 1998 (close wins in a couple of games we should have lost, namely LSU) before getting annihilated by Virginia Tech in the Music City Bowl. 1999 saw lightning in a bottle, and 2000 was a disaster. And oh yeah, remember the secretary affair scandal, and the losses to Louisiana Tech, Kentucky, and Central Florida? But DuBose, as terrible as he was, did not get fired. Actually, he submitted his resignation to Mal Moore after the third game of the 2000 season (the 21-0 debacle against Southern Miss), and Mal refused to take it at the time, but that was the end of his tenure. Granted, any decent school would have fired him, but the point remains he was never fired because it never came to that. DuBose saw how the job overwhelmed him, and he resigned.
Fran came next, and of course Fran left of his own volition for Texas A&M.
Then it was Shula, who stayed at Alabama for four seasons. In those four years, Shula had one winning season, a combined 1-12 record against Tennessee, Auburn, LSU, and Georgia, and never finished higher than third in the SEC West. In three of his four seasons, his teams finished fourth or worse in the SEC West. Shula was fired by athletic director Mal Moore approximately nine days after the 2006 Iron Bowl loss. In four seasons, his record was just barely .500.
So... that is it? All of these supposed "high expectations" have resulted in one coach fired? Basically, that is what that entire argument -- often repeated as it is -- reduces down to once you actually analyze it.
But what about the firing of Shula? Was that "high expectations," or have other schools done essentially the same thing in similar situations?
Let's look at that, too.
Once you delve into recent SEC football history, you see that many SEC schools have fired coaches within the past decade despite those coaches having won considerably more than what Shula won at Alabama. Let's go through some examples.
Auburn fired Terry Bowden mid-way through the 1998 season, Bowden's sixth on the Plains, after a 1-5 start. In his first five seasons at Auburn, Bowden went 45-12-1, and won 18 consecutive games to start his tenure. In 1997, the year before he was fired, Bowden's Tigers went 10-3 -- including wins over Alabama, Georgia, and LSU -- won the SEC West, came within one point (30-29) of beating Peyton Manning-led Tennessee in the SEC Championship Game, and wrapped up the season with a win in the Peach Bowl over Clemson. Moreover, Bowden led Auburn to -- at that point -- only its second undefeated season, and had a winning record over their two major rivals: Alabama and Georgia.
Georgia fired Jim Donnan at the end of the 2000 season. After inheriting Ray Goff's disaster, Donnan went 5-6 in his first year in Athens. The next four seasons, however, were much more successful, to say the least. From 1997-2000, the year he was fired, Donnan went 35-13, never having fewer than eight wins in a season, and he also racked up four bowl wins in those four years. Donnan was fired in 2000 following back-to-back 8-4 seasons.
LSU fired Gerry DiNardo in 1999 after back-to-back losing seasons. DiNardo took over the Archer / Hallman disaster that saw six consecutive losing seasons at LSU, and did quite well in his first three years. During that span, DiNardo went 26-9-1 and won three consecutive bowl games, quite a feat considering that LSU had only won three bowl games in the twenty-five years prior to DiNardo's arrival. And, winning three bowl games in three consecutive seasons was something that had never before happened in the history of LSU football. Nevertheless, he was fired.
Florida fired Ron Zook in 2004 after three consecutive winning seasons. All told, in his three seasons, he went 23-15, and made three bowl appearances. Even so, after a 4-3 start to the 2004 season, Zook was fired.
So, as you see, Alabama really didn't do anything out of the ordinary when we fired Shula. All told, Shula had a sub-par record in his four years at Alabama, and many other SEC schools in the past few years have fired coaches who had won considerably more than Shula did. But did we hear about how Auburn, Georgia, LSU, and Florida have supposedly ill-fated "high expectations"? Of course not. So why do you constantly hear that with Alabama? Dear Mr. Casual Reader, meet Mr. Double Standard.
Moreover, I should make another point. When Auburn, Georgia, LSU, and Florida fired their coaches, their successors all did much better, and that group of replacement hires is comprised of Tommy Tuberville, Mark Richt, Nick Saban, and Urban Meyer. No coach not in that group has won the SEC since Steve Spurrier did it in 2000. Obviously, having high expectations is nowhere near the disaster-guaranteed attribute that most "experts" and fans believe it to be.
Putting it all together, it seems that the argument of "high expectations" has no validity whatsoever. Though it may be repeated ad nauseam, it is nevertheless wholly untrue. Just because people swore for centuries that the world was flat didn't make it flat, and the same reasoning structure applies here. Certainly you must win and win big in order to remain the head coach at Alabama, but with the massive amount of resources poured into college football these days, that is true at any top Division 1-A program. The old Bryant saying, "Be good or be gone," certainly applies to big-time, modern-day college football. However, there is not even one tiny scintilla of evidence that would even remotely suggest that we have unduly high expectations that are far out of line with what is expected at other top programs.
At bottom, as is the same with the "Bear Is Dead" argument, the whole "high expectations" argument is patently absurd, and really only reflects upon the idiocy of whoever is making the argument.