Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes, I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes, I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
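The group-splitting procedure described above can be sketched in a few lines. Everything here is illustrative: the player names, stats, and the `make_groups` helper are mine, not anything the column actually uses.

```python
# Hypothetical sketch of the column's Group A / Group B method.
# Player names and numbers are made up for illustration.

def make_groups(players, metric, top_n):
    """Rank players by a metric; the top top_n form Group A,
    the bottom top_n form Group B. No cherry-picking allowed."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    return ranked[:top_n], ranked[-top_n:]

players = [
    {"name": "WR1", "yards_per_game": 95.0},
    {"name": "WR2", "yards_per_game": 88.0},
    {"name": "WR3", "yards_per_game": 61.0},
    {"name": "WR4", "yards_per_game": 54.0},
]

group_a, group_b = make_groups(players, "yards_per_game", top_n=2)
# The column's standing prediction: despite Group A leading so far,
# Group B will outscore Group A going forward.
```

The key design constraint is that the split is purely mechanical: once the metric is chosen, the ranking alone decides who lands in which group.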
Most importantly, because predictions mean nothing without accountability, I report on all my results in real time and end each season with a summary. Here's a recap from last year detailing every prediction I made in 2022, along with all results from this column's six-year history (my predictions have gone 36-10, a 78% success rate). And here are similar roundups from 2021, 2020, 2019, 2018, and 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I explained that touchdowns follow yards, but yards don't follow touchdowns, and predicted that high-yardage, low-touchdown receivers were going to start scoring a lot more going forward.
In Week 5, we revisited one of my favorite findings. We know that early-season overperformers and early-season underperformers tend to regress, but every year, I test the data and confirm that preseason ADP is still as predictive as early-season results even through four weeks of the season. I sliced the sample in several new ways to see if we could find some split where early-season performance was more predictive than ADP, but I failed in all instances.
In Week 6, I talked about how when we're confronted with an unfamiliar statistic, checking the leaderboard can be a quick and easy way to guess how prone that statistic will be to regression.
In Week 7, I discussed how just because something is an outlier doesn't mean it's destined to regress and predicted that this season's passing yardage per game total would remain significantly below recent levels.
In Week 8, I wrote about why statistics for quarterbacks don't tend to regress as much as statistics for receivers or running backs and why interception rate was the one big exception. I predicted that low-interception teams would start throwing more picks than high-interception teams going forward.
In Week 9, I explained the critical difference between regression to the mean (the tendency for players whose performance had deviated from their underlying average to return to that average) and the gambler's fallacy (the belief that players who deviate in one direction are "due" to deviate in the opposite direction to offset).
| Statistic for Regression | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
|---|---|---|---|
| Yards per Carry | Group A had 42% more rushing yards per game | Group A has 10% more rushing yards per game | None (Loss) |
| Yard-to-TD Ratio | Group A had 7% more points per game | Group B has 38% more points per game | None (Win) |
| Passing Yards | Teams averaged 218.4 yards per game | Teams average 220.1 yards per game | 8 |
| Interceptions Thrown | Group A threw 25% fewer interceptions | Group A threw 5% fewer interceptions | 2 |
There won't be much to say about our passing yards per game prediction until a bit later in the year. It's a good sign that the total hasn't started running away from us; as long as we're within a couple yards of where we started, we'll be well-positioned for the Autumn Wind to push team averages down to the lowest levels we've seen in over a decade.
Group A continues to throw fewer interceptions than Group B, but the data so far is a bit misleading; four Group A teams were on bye last week vs. none for Group B. If they'd all played and matched the rest of Group A's average, Group B would be slightly ahead right now. That bye advantage will reverse almost immediately, with three Group B teams sitting out next week compared to just one from Group A.
Still, I think Group A's performance so far has been fairly surprising. They averaged 0.58 interceptions per game at the time of the prediction and averaged 0.60 interceptions per game in the two weeks since. I expected them to regress more by this point.
I don't think it's anything that demands an explanation. Sometimes, random data behaves randomly, especially over small samples. I still expect matters to regress just as much as I did when I made the prediction. But I think when something is surprising, it's useful to be able to note that surprise. Many are tempted to search for explanations as to why something really shouldn't have been surprising in hindsight, but I think that those explanations tend to be overfitting and this instinct leads to worse predictions in the long run. It's good to just be surprised sometimes.
Let's Compare This Column To A Medical Condition...
Let's say you go to the doctor for your routine physical. Your doctor draws some blood to run some tests and then calls you the next day with some bad news. It seems you've tested positive for some condition or other. The doctor tells you it's pretty rare -- only one out of every 10,000 or so people have it -- but the test is 95% accurate. What are the odds you actually have the condition?
Most people will say 95% here, and most people will be way off. (Don't feel bad; many doctors get this wrong, too.) Your actual odds are about 0.2%. Now, 0.2% is not nothing. It's about a 1-in-500 shot. But I'd be a lot happier with those odds than the 95% I might have reflexively expected.
If the test is so accurate, why are the odds so low? It all comes down to a concept called "base rates".
Imagine giving this 95% accurate test to a million people. Since 1-in-10,000 people have this condition, we'd expect about 100 people to actually have the condition in question. Of those 100, 95 would (correctly) test positive and 5 would test negative (a 5% error rate). We'd also expect 999,900 people to not have the condition, and of that group, 949,905 would (correctly) test negative and 49,995 would test positive (a 5% error rate).
After testing those million people, we'd have 50,090 positive results, of which 95 would be "true positives", or around 0.2% of all positive tests. There are far more "false positive" test results simply because there are so many more people who don't have the condition than who do.
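The counting argument above is easy to verify in code. This sketch assumes, as the example does, that "95% accurate" means both the false-positive and false-negative rates are 5%:

```python
# Recreate the base-rate arithmetic: 1-in-10,000 prevalence,
# 95% accurate test, administered to a million people.
population = 1_000_000
sick = population // 10_000                 # 100 people have the condition
healthy = population - sick                 # 999,900 people don't
accuracy = 0.95

true_positives = sick * accuracy            # 95 correct positives
false_positives = healthy * (1 - accuracy)  # 49,995 incorrect positives

all_positives = true_positives + false_positives  # 50,090 positive tests
chance_sick = true_positives / all_positives      # roughly 0.2%
```

The false positives swamp the true positives purely because the healthy group is 9,999 times larger than the sick group.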
Does this still seem wild to you? Let's use my favorite logical technique to illustrate it better: reductio ad absurdum. Let's say that there's a condition called "Regressionitis" that absolutely, positively does not exist. It is a fake condition that I made up and as such no one in the known universe actually has it, because it's not real.
Now let's say I develop a test for Regressionitis that is 99.9999999% accurate. Let's say I administer this test to you and it comes back positive. What are the odds that you have Regressionitis? They're still zero. Regressionitis isn't real and nobody has it; for my "99.9999999% accurate test," I just generated a random number between one and a billion, and if the number was 43, I told you that you had Regressionitis.
The odds you have a condition depend not just on the test's accuracy but on the condition's rarity. The rarer the condition is, the less likely you have it, even with a positive test.
This method of starting with your baseline odds and updating as new information comes in is called Bayesian inference, and it's one of the most powerful tools in our toolkit for estimating the means that everything is expected to regress to.
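That updating process can be written as a tiny function. The function name and framing here are mine (a generic statement of Bayes' rule, not anything from the column): the posterior after one piece of evidence becomes the prior for the next.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start from the base rate (1-in-10,000) and update on a single
# positive result from the 95% accurate test: roughly 0.2%.
posterior = bayes_update(prior=1 / 10_000,
                         p_evidence_if_true=0.95,
                         p_evidence_if_false=0.05)

# A second, independent positive test updates again, with the first
# posterior serving as the new prior.
posterior_2 = bayes_update(posterior, 0.95, 0.05)
```

Note how even two positive results from a 95% accurate test still leave the odds well under 5%, because the starting base rate was so low.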