Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.
Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary is that we have a 32-7 lifetime record, an 82% success rate.
If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4 I discussed the tendency for touchdowns to follow yards and predicted that players scoring a disproportionately high or low amount relative to their yardage total would see significant regression going forward.
In Week 5, I revisited an old finding that preseason ADP tells us as much about rest-of-year outcomes as fantasy production to date does, even a quarter of the way through a new season. No specific prediction was made.
In Week 6, I explained the concept of "face validity" and taught the "leaderboard test", my favorite quick-and-dirty way to tell how much a statistic is likely to regress. No specific prediction was made.
In Week 7, I talked about trends in average margin of victory and tried my hand at applying the concepts of regression to a statistic I'd never considered before, predicting that teams would win games by an average of between 9.0 and 10.5 points per game.
In Week 8, I lamented that interceptions weren't a bigger deal in fantasy football given that they're a tremendously good regression target, and then I predicted interceptions would regress.
In Week 9, I explained why the single greatest weapon for regression to the mean is a large sample size. For any individual player, game, or week, regression might only be a 55/45 bet, but aggregate enough of those bets and it becomes a statistical certainty. No specific prediction was made.
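That aggregation argument is easy to verify with a quick simulation. The sketch below uses hypothetical numbers: it assumes every prediction is an independent 55/45 bet and estimates, via Monte Carlo, how often the majority of those bets land in our favor as the sample grows.

```python
import random

def majority_win_prob(n_bets, p=0.55, trials=10_000, seed=42):
    """Estimate the chance that more than half of n_bets independent
    55/45 bets land in our favor, via Monte Carlo simulation."""
    rng = random.Random(seed)
    favorable = 0
    for _ in range(trials):
        # Count how many of the n_bets succeed in this simulated season
        wins = sum(rng.random() < p for _ in range(n_bets))
        if wins > n_bets / 2:
            favorable += 1
    return favorable / trials

# One bet is barely better than a coin flip; a thousand is near-certain.
for n in (1, 11, 101, 1001):
    print(f"{n:>4} bets: {majority_win_prob(n):.1%}")
```

With one bet you win about 55% of the time; with a hundred-odd independent bets the majority goes your way well over 80% of the time, and with a thousand it is effectively guaranteed.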
In Week 10, I explored the link between regression and luck, noting that the more something was dependent on luck, the more it would regress, and predicted that "schedule luck" in the Scott Fish Bowl would therefore regress completely going forward.
In Week 11, I broke down the very important distinction between "mean reversion" (the tendency of players to perform around their "true talent level" going forward, regardless of how they have performed to date) and "gambler's fallacy" (the idea that overperformers or underperformers are "due" for a correction).
In Week 12, I talked about how much of a team's identity was really just random noise and small samples and projected that some of the most rush-heavy teams would skew substantially more pass-heavy going forward.
| STATISTIC FOR REGRESSION | PERFORMANCE BEFORE PREDICTION | PERFORMANCE SINCE PREDICTION | WEEKS REMAINING |
|---|---|---|---|
| Yards per Carry | Group A had 24% more rushing yards per game | Group B has 25% more rushing yards per game | None (Win!) |
| Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 12% more fantasy points per game | None (Loss) |
| Margin of Victory | Average margins were 9.0 points per game | Average margins are 9.9 points per game | None (Win!) |
| Defensive INTs | Group A had 65% more interceptions | Group B has 50% more interceptions | None (Win!) |
| Schedule Luck | Group A had 38% more wins | Group A has 4% more wins | None (Loss*) |
| Offensive Identity | Group A had 12% more rushing TDs | Group A has 15% more passing TDs | 3 |
As we've seen, one week barely moves the needle on a prediction, but it's nice to get off to a positive start nonetheless. At the time of the prediction, our Group A teams had 15 more rushing than passing touchdowns. Last week they totaled two more passing than rushing touchdowns and nearly had a third, as Derrick Henry took a reception 69 yards before fumbling the football into the end zone where it was recovered for a score. (Fumble recovery touchdowns count as neither rushing nor passing for our purposes.)
How Right Do We Want To Be?
It's important to be right. One of the purposes of this column is to help you anticipate future trends so you can manage your team more intelligently, and to do that, my predictions need to be right. Another point of this column is to convince you that regression to the mean is a powerful and reliable force, and to do that, my predictions need to be right. (The third aim is to teach you how to predict regression for yourself; my predictions don't actually need to be right for that one, though it certainly doesn't hurt.)
Winning in fantasy football often comes down to being right more than the next manager. You decide between two players in the draft and if you're right, you'll score more points. You decide between two players in your weekly lineup and if you're right, you'll win more games. You decide to make a trade (or not), you decide to make a waiver claim (or not), and at the end of the year, the manager who made the most right decisions has the best chance of holding up the trophy. (Not a perfect chance, of course, because luck still abounds.)
Given all of that, you'd think a question like "how right do we want to be?" is nonsensical. We want to be maximally right. Obviously.
And on questions where we are forced to make a decision between limited options, that's true. If we have no choice but to choose, it is optimal to be right every single time. To crush every single draft pick, to nail every single start/sit choice.
But when our decision space is less bounded, when our choices include not just what to choose but whether to choose at all, the answer surprisingly changes. Sometimes it's possible to be too right. There are situations where a manager who is wrong more often will outcompete a manager who is wrong less often.
When You're Not Getting It Wrong Enough
I'll give the most common case I come across. I play in a lot of dynasty leagues where teams keep their players indefinitely. Once a guy is on your roster, he's yours until you cut him. Because of this, there are fewer paths to acquire new players; there's an annual draft for incoming rookies and unrostered free agents, but you might only get a few players a year out of it. You can add players off of waivers, but you're usually not finding regular contributors there. As a result, if you have a need on your roster, it becomes that much more important in this format to trade with other leaguemates to fill it.
You would think that managers who are good traders would do well in this format, and you'd be right. But the best traders are not the managers who "win" trades most frequently. In fact, I would wager that a manager who wins 66% of his trades probably has a better long-term record than a manager who wins 90% of his trades.
Why? There's a saying in poker: "if you only get in good, you're not getting in enough". If you only buy into a pot when your odds of winning are 90%, you're leaving money on the table. You could find a hand where you had a 70% chance of winning, and betting that hand would still be positive expected value; you're expected to make money from playing it. But play a lot of 70% hands and you'll pull your win rate down from 90% in a hurry.
Similarly, if you're winning 90% of your trades, you're only accepting trades that are wildly biased in your favor. That's fine, winning is good, but it means there are opportunities out there for trades that are only moderately biased in your favor, and you're passing them up. And even more opportunities for trades that are mildly biased in your favor, and you're passing those up, too. There's profit just lying around on the street and you're too timid to bend over and pick it up.
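The poker logic translates directly into expected value. Here's a rough sketch with hypothetical numbers (even-money stakes, and made-up counts of how many trades are available at each edge):

```python
def expected_profit(opportunities):
    """Total expected profit from even-money bets (win +1 unit, lose -1).
    opportunities is a list of (win_probability, number_of_bets) pairs."""
    return sum(n * (p - (1 - p)) for p, n in opportunities)

def win_rate(opportunities):
    """Expected fraction of accepted bets that we win."""
    total = sum(n for _, n in opportunities)
    return sum(p * n for p, n in opportunities) / total

# Hypothetical decision styles: the picky manager only takes the ten
# heavy favorites; the aggressive manager also takes thirty moderate edges.
picky = [(0.90, 10)]
aggressive = [(0.90, 10), (0.70, 30)]

print(f"picky:      win rate {win_rate(picky):.0%}, "
      f"profit {expected_profit(picky):.1f} units")
print(f"aggressive: win rate {win_rate(aggressive):.0%}, "
      f"profit {expected_profit(aggressive):.1f} units")
```

Under these made-up numbers, the aggressive manager's win rate falls from 90% to 75%, but expected profit more than doubles (8 units versus 20). That's the sense in which a trader who "wins" 66% of deals can out-earn one who wins 90%.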
When I evaluate my dynasty teams, I try to ensure I'm winning between 60 and 70% of my trades. Much more than that and I'm not being aggressive enough; I'm leaving money on the table. Much less than that and I'm being too reckless and need to dial it back.
When I write Regression Alert, I have a similar thought process. It would be fairly easy for me to put up a 90-100% success rate if that were my goal. When Justin Fields averages 160 rushing yards over back-to-back games, just predict that he won't do that again (because nobody in history has ever done that). When Tyreek Hill sets the record for receiving yards through nine games, predict he'll average fewer receiving yards over the second half (because nobody in history has ever maintained anywhere close to that pace).
But predictions like that aren't interesting. "Player who is doing what no player has ever done before will probably not keep doing it" is something that you don't really need me to tell you, and it's not going to do much to further my goals (helping your teams, instilling respect for regression's power, teaching how to spot it in the wild). If I'm never getting anything wrong, I'm never saying anything worth saying.
At the same time, I do want to get it right much more often than I get it wrong. Because again, I can't help your teams if I'm giving bad advice, I can't convince you about how cool regression is if I keep misidentifying it, and I can't teach you how to spot it if I'm not spotting it myself.
But that's why I'll make riskier or more dramatic predictions, why I'll make guesses about things I've never tried before, and so on. Because the worst that can happen is that I'm wrong, and it's important that that happen from time to time.
If you've never managed your teams this way, I'd urge you to consider it sometime. Society places tremendous pressure on us to be right all the time, even if we have to "fake it til we make it". You should never intentionally be wrong. But it's freeing to make decisions knowing that not only is it okay to be wrong on occasion, it's actually for the best. If you're never getting anything wrong, you're never doing anything interesting.