Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary is that we have a 32-7 lifetime record, which works out to an 82% success rate.
If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I discussed the tendency for touchdowns to follow yards and predicted that players scoring a disproportionately high or low amount relative to their yardage total would see significant regression going forward.
In Week 5, I revisited an old finding that preseason ADP tells us as much about rest-of-year outcomes as fantasy production to date does, even a quarter of the way through a new season. No specific prediction was made.
In Week 6, I explained the concept of "face validity" and taught the "leaderboard test", my favorite quick-and-dirty way to tell how much a statistic is likely to regress. No specific prediction was made.
In Week 7, I talked about trends in average margin of victory and tried my hand at applying the concepts of regression to a statistic I'd never considered before, predicting that teams would win games by an average of between 9.0 and 10.5 points per game.
In Week 8, I lamented that interceptions weren't a bigger deal in fantasy football given that they're a tremendously good regression target, and then I predicted interceptions would regress.
In Week 9, I explained why the single greatest weapon for regression to the mean is large sample sizes. For individual players, individual games, or individual weeks, regression might only be a 55/45 bet, but if you aggregate enough of those bets, it becomes a statistical certainty. No specific prediction was made.
In Week 10, I explored the link between regression and luck, noting that the more something was dependent on luck, the more it would regress, and predicted that "schedule luck" in the Scott Fish Bowl would therefore regress completely going forward.
In Week 11, I broke down the very important distinction between "mean reversion" (the tendency of players to perform around their "true talent level" going forward, regardless of how they have performed to date) and "gambler's fallacy" (the idea that overperformers or underperformers are "due" for a correction).
| Statistic for Regression | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
|---|---|---|---|
| Yards per Carry | Group A had 24% more rushing yards per game | Group B has 25% more rushing yards per game | None (Win!) |
| Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 12% more fantasy points per game | None (Loss) |
| Margin of Victory | Average margins were 9.0 points per game | Average margins are 9.9 points per game | None (Win!) |
| Defensive INTs | Group A had 65% more interceptions | Group B has 50% more interceptions | None (Win!) |
| Schedule Luck | Group A had 38% more wins | Group A has 4% more wins | None (Loss*) |
When we made our schedule luck prediction two weeks ago, Group A had an all-play win% of 48.5% and an actual win% of 57.3%, while Group B had an all-play win% of 54.5% and an actual win% of 41.5%. I predicted both groups' schedule luck would regress towards neutral, and it did; Group A won 52.9% of its games over the last two weeks and Group B won 51.0%, both substantially regressed from their prior performance. But I predicted that Group B would pass Group A entirely. Instead, Group B came up just short, so that's our second loss of the season, right?
Well... yes. We're putting it down as a loss. But when I was digging into it to see why we lost, I noticed some serious irregularities in the data. I mentioned at the time of the prediction that it was odd that Group A featured just 35 teams while Group B featured 269; since the selection criteria were symmetric around the mean, both samples should have been roughly the same size. And when I checked the teams' all-play records over the last two weeks, things got stranger. Several teams, for instance, managed to "lose" all-play wins, which should be impossible; as an example, one team had an all-play record of 50-49 at the time of the prediction but an all-play record of 46-75 today. Even if that team had posted the lowest weekly score in each of the last two weeks, the worst possible all-play record would have been 50-71.
There are plenty of these irregularities scattered through the dataset, enough so that I'm convinced there was an error in the data two weeks ago, there's an error in the data today, or both. Given that, and given the overall regression in winning percentages, I think this prediction was most likely actually correct. But rules are rules and the criteria were spelled out in advance, so for our purposes we're putting it down as our second loss of the year.
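If you keep your own exports of all-play standings and want to run a similar sanity check, here's a minimal sketch in Python. The data structures and the team name are hypothetical placeholders, and the 11-opponents-per-week assumption depends on your league's scoring-group size.

```python
# Sanity check for all-play records between two snapshots.
# "before" and "after" map team name -> (all-play wins, all-play losses);
# the team name and records here are hypothetical placeholders.
before = {"Team X": (50, 49)}   # records at the time of the prediction
after = {"Team X": (46, 75)}    # records today
weeks_elapsed = 2
opponents_per_week = 11         # a 12-team scoring group plays 11 all-play games per week

for team, (wins_before, losses_before) in before.items():
    wins_after, losses_after = after[team]
    games_added = (wins_after + losses_after) - (wins_before + losses_before)
    expected = weeks_elapsed * opponents_per_week
    # All-play wins can never decrease; at worst a team adds only losses.
    if wins_after < wins_before:
        print(f"{team}: impossible -- all-play wins fell from {wins_before} to {wins_after}")
    if games_added != expected:
        print(f"{team}: suspicious -- {games_added} games added, expected {expected}")
```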
Does "Offensive Identity" Regress?
Last year, I asked whether the degree to which teams favored the rush or the pass was stable or tended to regress. Specifically, I looked at the manner in which teams score their touchdowns, because as you all know, touchdowns are much less stable from sample to sample than yards. (Our yard-to-touchdown ratio loss earlier this season notwithstanding.)
I found that, yes, the teams that had scored a disproportionate number of rushing touchdowns started accruing more passing scores going forward, and vice versa. And given that the 2022 offensive environment has been so weird (see: the decline in average margin of victory), I wanted to repeat the prediction just to show how stable the finding really is.
So I headed over to https://www.pro-football-reference.com/, the best repository of NFL statistics on the planet, and I went to the 2022 season summary. Scrolling down, I found that there have been 451 passing touchdowns against 309 rushing touchdowns so far this season, meaning 59.3% of all touchdowns have come through the air. To get some context for that number, I checked the 2020 season and found there were 871 passing touchdowns compared to 532 rushing, a 62.1% passing share. In 2021 it was 840 passing touchdowns to 505 rushing touchdowns, a 62.4% share. It looks, then, like this league-wide split is fairly stable from year to year. (An interesting note: the split tends to skew slightly more towards rushing touchdowns later in the year, which is not surprising because running the ball is generally considered more advantageous in poor weather conditions.)
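For anyone who wants to double-check the arithmetic, the passing share is just passing touchdowns divided by total touchdowns. Here's a quick sketch using the totals quoted above; the function name is purely illustrative.

```python
def passing_td_share(pass_tds: int, rush_tds: int) -> float:
    """Share of offensive touchdowns scored through the air."""
    return pass_tds / (pass_tds + rush_tds)

print(f"2022 to date: {passing_td_share(451, 309):.1%}")  # 59.3%
print(f"2020:         {passing_td_share(871, 532):.1%}")  # 62.1%
print(f"2021:         {passing_td_share(840, 505):.1%}")  # 62.5% (62.4% when rounded down, as above)
```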
From past work, I'd found that offensive identity did vary from team to team, but virtually all teams scored between 45% and 75% of their touchdowns through the air.
Then I looked at the teams from 2022. Since fewer games have been played, I expected a bigger spread in the percentage of touchdowns coming via the pass. Remember, "statistics regress more over larger samples" is one of the biggest rules of this column and why we typically make predictions for four weeks instead of just one or two (unless we can scrape together a large enough sample).
This time, only four teams fell outside of that 45-75% passing touchdown rate that we estimate is sustainable. The Packers have scored 82.6% of their touchdowns through the air and the Chiefs have scored 77.7%, while the Giants have scored just 40.9% of their touchdowns via pass and the Browns are even lower at 40.7%. (Maybe you think that since Aaron Rodgers and Patrick Mahomes II are perennial MVP candidates, they can sustain such a high rate. But the Packers have been at exactly 75% in each of the past two seasons, while the Chiefs have averaged 72.6% over that span. They may not be overperforming by much, but they are still overperforming.)
Four teams isn't a very large sample, but if we expand to include all teams within five percentage points of one of those cutoffs, we add the Buccaneers (75%), Texans (73.3%), and Dolphins (72.4%) on the high end and the Eagles (45.4%), Steelers (46.6%), Panthers (47.4%), and Bears (48.1%) on the low end.
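As a quick illustration of that selection step, here's a sketch that applies the 45-75% band and the five-point buffer to the shares quoted above. The numbers come straight from this article; everything else is just illustrative scaffolding.

```python
# Passing-TD shares quoted above, as fractions (only the relevant teams shown).
pass_td_share = {
    "Packers": 0.826, "Chiefs": 0.777, "Buccaneers": 0.750, "Texans": 0.733,
    "Dolphins": 0.724, "Bears": 0.481, "Panthers": 0.474, "Steelers": 0.466,
    "Eagles": 0.454, "Giants": 0.409, "Browns": 0.407,
}

LOW, HIGH, BUFFER = 0.45, 0.75, 0.05

# Teams outside the historically sustainable band.
outliers = sorted(t for t, s in pass_td_share.items() if s < LOW or s > HIGH)
# Teams inside the band but within five points of either cutoff.
near_cutoff = sorted(t for t, s in pass_td_share.items()
                     if LOW <= s <= LOW + BUFFER or HIGH - BUFFER <= s <= HIGH)

print("Outside the 45-75% band:", outliers)
print("Within five points of a cutoff:", near_cutoff)
```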
With the outliers identified, my goal is to come up with a prediction that they will regress in a meaningful way. There's a bit of a conflict here: I want to focus on the biggest outliers (because they'll regress the most), but I also want to include as many teams as possible to reduce the role of random chance. The other challenge with a prediction like this is making it easy to state and easy to track. "The Chicago Bears will see a higher percentage of their touchdowns coming through the air while the Houston Texans see a lower percentage" is true, but it's neither concrete nor easily trackable.
So I chose to focus on the low end of the range. I already mentioned the Browns, Giants, Eagles, Steelers, Panthers, and Bears as teams scoring more rushing touchdowns than passing touchdowns. Additionally, the Cowboys, Lions, Patriots, Jets, and Titans have scored the same number of rushing touchdowns and passing touchdowns. Given that the league average is 59% (and the long-run average is closer to 62%), I'd expect this group of teams to start scoring more passing touchdowns and fewer rushing touchdowns going forward.
This gives me eleven teams to focus on. Collectively, those teams have 137 rushing touchdowns against just 122 passing, meaning they've rushed for fifteen more touchdowns than they've passed for. Over the next four weeks, through the magic of regression, I predict they'll have more passing touchdowns than rushing touchdowns.
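Tracking this prediction is simple: add up the group's passing and rushing touchdowns over the next four weeks and compare. Here's a minimal sketch; the team list is the one above, and the weekly counts are placeholders to be filled in from box scores.

```python
# The eleven teams in the prediction group.
group = ["Browns", "Giants", "Eagles", "Steelers", "Panthers", "Bears",
         "Cowboys", "Lions", "Patriots", "Jets", "Titans"]

# Placeholder per-team touchdown counts over the next four weeks:
# team -> (passing TDs, rushing TDs). Real values come from weekly box scores.
next_four_weeks = {team: (0, 0) for team in group}

group_pass_tds = sum(p for p, _ in next_four_weeks.values())
group_rush_tds = sum(r for _, r in next_four_weeks.values())

print(f"Passing TDs: {group_pass_tds}, Rushing TDs: {group_rush_tds}")
print("Prediction", "holds" if group_pass_tds > group_rush_tds else "fails")
```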
One might wonder why I care about a statistic as esoteric as "percentage of touchdowns coming through the air". The simple truth is that how many touchdowns a team scores is a function of how good its offense is, but how it scores those touchdowns is much more random, especially over small samples. There's a compelling story we can tell ourselves about each of those "rush-heavy" teams to explain why they're so rush-heavy: the Browns have the best running back duo in the league; the Giants, Eagles, and Bears have prolific running quarterbacks; the Titans have Derrick Henry; and the Steelers, Panthers, Jets, and Patriots have had woeful passing offenses. (The Lions are a bit of an oddball, though they have dealt with a lot of injuries and discontinuity among their receiving corps.)
But the stories we tell ourselves to explain the past are often just that: stories. So it's good to subject those stories to testing like this to determine if they're describing a real phenomenon or if they're just our way of grasping for an explanation for the inexplicable.