Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.
For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes, I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes, I'll give some practical examples of regression at work.
In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
Most importantly, because predictions mean nothing without accountability, I report on all my results in real time and end each season with a summary. Here's a recap from last year detailing every prediction I made in 2022, along with all results from this column's six-year history (my predictions have gone 36-10, a 78% success rate). And here are similar roundups from 2021, 2020, 2019, 2018, and 2017.
The Scorecard
In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.
In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.
In Week 4, I explained that touchdowns follow yards, but yards don't follow touchdowns, and predicted that high-yardage, low-touchdown receivers were going to start scoring a lot more going forward.
In Week 5, we revisited one of my favorite findings. We know that early-season overperformers and early-season underperformers tend to regress, but every year, I test the data and confirm that preseason ADP is still as predictive as early-season results even through four weeks of the season. I sliced the sample in several new ways to see if we could find some split where early-season performance was more predictive than ADP, but I failed in all instances.
In Week 6, I talked about how when we're confronted with an unfamiliar statistic, checking the leaderboard can be a quick and easy way to guess how prone that statistic will be to regression.
In Week 7, I discussed how just because something is an outlier doesn't mean it's destined to regress and predicted that this season's passing yardage per game total would remain significantly below recent levels.
In Week 8, I wrote about why statistics for quarterbacks don't tend to regress as much as statistics for receivers or running backs and why interception rate was the one big exception. I predicted that low-interception teams would start throwing more picks than high-interception teams going forward.
In Week 9, I explained the critical difference between regression to the mean (the tendency for players whose performance had deviated from their underlying average to return to that average) and the gambler's fallacy (the belief that players who deviate in one direction are "due" to deviate in the opposite direction to offset).
In Week 10, I discussed not only finding stats that were likely to regress to their "true mean", but also how we could estimate what that true mean might be.
| STATISTIC FOR REGRESSION | PERFORMANCE BEFORE PREDICTION | PERFORMANCE SINCE PREDICTION | WEEKS REMAINING |
| --- | --- | --- | --- |
| Yards per Carry | Group A had 42% more rushing yards per game | Group A has 10% more rushing yards per game | None (Loss) |
| Yard-to-TD Ratio | Group A had 7% more points per game | Group B has 38% more points per game | None (Win) |
| Passing Yards | Teams averaged 218.4 yards per game | Teams average 221.0 yards per game | 7 |
| Interceptions Thrown | Group A threw 25% fewer interceptions | Group B has thrown 15% fewer interceptions | 1 |
There won't be much to say about our passing yards per game prediction until a bit later in the year. It's a good sign that the total hasn't started running away from us; as long as we're within a couple of yards of where we started, we'll be well-positioned for the Autumn Wind to push team averages down to the lowest levels we've seen in over a decade.
I'd been surprised by how stubborn interception rates had been and noted last week that Group B had a significant bye advantage in Week 10 (after Group A had one in Week 9). We needed Group B to firmly take the lead if we were to have any hope of winning this prediction, but fortunately, that's exactly what happened. Things are tighter than I'd like heading into the final week, but it's nice that Group B at least has a little head start.
Regression and Large Samples
One of the key features of regression to the mean is that outlier performances are significantly more likely over small samples. If I flip a coin that's weighted to land on heads 60% of the time, there's still a 40% chance that any single flip lands on tails. Given those odds, landing on tails wouldn't be very surprising at all. But if I flipped the same coin a million times, the odds of seeing tails come up more often than heads dwindle to practically nothing.
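If you want to see just how quickly those tails-heavy outcomes disappear, here's a back-of-the-envelope sketch in Python. It uses a normal approximation to the binomial, so treat the small-sample numbers as rough estimates rather than exact probabilities:

```python
# How often does a coin weighted 60/40 toward heads still produce more tails
# than heads? Normal approximation to the binomial distribution.
from math import sqrt
from statistics import NormalDist

P_HEADS = 0.6  # the coin's true ("underlying") heads rate

def prob_tails_outnumber_heads(n_flips: int) -> float:
    mean = n_flips * P_HEADS
    sd = sqrt(n_flips * P_HEADS * (1 - P_HEADS))
    # Tails outnumber heads when heads land on fewer than half the flips.
    return NormalDist(mean, sd).cdf(n_flips / 2)

for n in (1, 10, 100, 10_000, 1_000_000):
    print(f"{n:>9,} flips: {prob_tails_outnumber_heads(n):.6f}")
```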
This idea that variance evens out over larger samples is one of the key insights in fantasy football. Why do top DFS players compete with so many different lineups every week? The answer is not, as is commonly believed, because it increases their expected return on investment. Indeed, every DFS player has a "best" lineup, a lineup that they think is most likely to win that week, and every other lineup that player submits actually decreases expected payout (because it's a worse lineup than the best lineup).
So why submit so many different lineups? Because outlier performances are significantly more likely over small samples. By using 20 lineups in a week, top players reduce the amount of money they'd be expected to win, but they also reduce the chances of a single injury or bad performance wiping out their entire bankroll, and that's a worthwhile trade.
In larger tournaments, they might estimate their best lineup has a 1% chance of winning while their second-best lineup only has a 0.9% shot. But if they only played the 1% lineup, it would take 69 weeks (about four years) before they passed the point where their odds of winning at least one tournament were greater than 50%. By using both the 1% and 0.9% lineup, that falls to 37 weeks. If they added a lineup with a 0.8% chance of winning, that falls to 26 weeks. The lineups might be worse, but the more you enter, the quicker luck tends to strike.
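For the curious, those week counts fall straight out of the "at least one win" formula. Here's a small sketch, assuming each week is independent and that the per-week win probability is simply the sum of the individual lineups' chances:

```python
# Smallest number of weeks n such that 1 - (1 - p)^n clears 50%, where p is
# the combined per-week probability of winning the tournament.
from math import ceil, log

def weeks_until_favored(weekly_win_prob: float) -> int:
    return ceil(log(0.5) / log(1 - weekly_win_prob))

print(weeks_until_favored(0.01))                  # best lineup only: 69 weeks
print(weeks_until_favored(0.01 + 0.009))          # add the 0.9% lineup: 37 weeks
print(weeks_until_favored(0.01 + 0.009 + 0.008))  # add the 0.8% lineup: 26 weeks
```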
(Of course, larger samples reduce variance in both directions. A DFS player who submits 20 lineups is far less likely to lose their entire bankroll, but they're far less likely to double it, too. And multi-entering big tournaments reduces the time until they finally win one, but greatly increases the cost, decreasing the proportionate return on investment.)
Why is it that four weeks at the beginning of the year don't give us enough information to outperform ADP, but five weeks do? Because four weeks is too small a sample for the outlier performances to have washed out sufficiently, and five weeks is not.
This is why the preferred practice around here is to select groups of players or teams to compare in our predictions. If we selected a single player, we'd be wrong much more often (much like betting on heads with our weighted coin will still lose 40% of the time on any single flip). By bundling similar players into groups of five or ten, we make it much more likely that the overall pattern emerges.
This is also why the preferred practice around here is to let predictions run for four weeks. I'd sometimes love to let them run even longer, but a clearly defined endpoint is critical for accountability; it prevents me from just running a prediction until Group B pulls ahead and then immediately closing it and declaring it a success.
The fact that outliers are more common on smaller samples tends to manifest in our results over time, too. We usually see Group B take its biggest lead in the week or two after the prediction and then watch that lead shrink over the remaining weeks, for instance. At the same time, when Group B does trail, it likewise typically does so in the week or two immediately after the prediction before pulling back ahead in Weeks 3 and 4.
Indeed, I always report the size of the lead in either direction (that darn accountability thing again), but the raw size of a lead can be misleading. A 20% lead over four weeks is significantly more impressive (as in "less likely to happen by chance alone / more likely to be a result of genuine signal") than a 30% lead over one week. (Consider: if Group B leads Group A by 30% after one week, then a 30% edge by Group A in the next week will erase that lead. If Group B leads Group A by 20% after four weeks, then a 30% edge by Group A in the next week will still leave Group B about 10% ahead, provided the weekly totals are roughly equal.)
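To make that concrete, here's a tiny sketch of the two scenarios, assuming (purely for illustration) that Group A scores about 100 points in a typical week:

```python
# Two scenarios with an assumed weekly baseline of 100 points for Group A.
BASE = 100.0

# Scenario 1: Group B leads by 30% after one week, then loses week 2 by 30%.
b = 1.3 * BASE + BASE        # 230
a = BASE + 1.3 * BASE        # 230
print(b / a - 1)             # 0.0 -- the lead is gone

# Scenario 2: Group B leads by 20% after four weeks, then loses week 5 by 30%.
b = 4 * 1.2 * BASE + BASE    # 580
a = 4 * BASE + 1.3 * BASE    # 530
print(b / a - 1)             # ~0.094 -- still roughly a 10% lead
```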
Here's another illustration of the impact of small differences over large samples. If you've been watching football long enough, you probably remember the 2004 NFL season. The 2003 season closed with the New England Patriots beating the Indianapolis Colts 24-14 in a game that wasn't as close as the final score might suggest. The Colts complained that the Patriots' defensive backs were hitting receivers more than five yards beyond the line of scrimmage, which violated the rules as written, and that the referees let it slide.
Over the offseason, the NFL's competition committee decided to place a "point of emphasis" on ensuring officiating crews began calling downfield contact in line with the rules as written. NFL defenses adjusted by being less physical in coverage, and passing offenses exploded, setting numerous records, headlined by Peyton Manning's 49-touchdown season. After 2004, the NFL quietly dropped the point of emphasis, officiating crews went back to letting contact six or seven yards downfield slide, and offenses dropped off again.
In 2003, the league-wide average for yards per pass attempt was 6.6. In 2004, it spiked all the way to 7.1. In 2005, it fell back down to 6.8. (For context, yards per attempt over the last three seasons has come in at 7.0, 7.1, and 7.2; the 2004 season was essentially 15 years ahead of its time.)
That was it. Three tenths of a yard, five tenths of a yard per pass attempt: that was the difference between a stifling defensive environment and a wide-open offensive environment. When an offense dropped back to pass in 2004, the result was approximately 7.5% better than when it dropped back to pass in 2003.
On a player level, an extra 0.3 to 0.5 yards per attempt isn't a big difference. So far this season, Jared Goff leads Patrick Mahomes II by 0.4 yards per attempt, though I suspect most would say Mahomes has been the better player.
But that's a difference of half a yard on a couple of hundred attempts. This was a difference of half a yard... over 16,354 attempts. Despite attempting 139 fewer passes, the league as a whole passed for an extra 5,169 yards in 2004. It basically conjured an entire 1984 Dan Marino out of thin air. That half yard per attempt was a massive change given the sheer number of attempts in question; a small difference becomes far more meaningful over a large sample.
What does this mean for us? It means that if we want to profit reliably in fantasy football by betting on regression to the mean, we're going to need to place a lot of bets. Trading away one player or acquiring another simply because their profile suggests regression is a positive move in expectation, but the range of possible outcomes is massive. It could work out really well, or it could work out terribly. Weighted coins still flip tails sometimes.
It also means that the more bets we place on regression to the mean, the more our upside becomes capped. With larger samples, the odds of hitting big on every bet decline. The larger our sample, the smaller our potential rate of return.
But just like those top DFS players, when we have a genuine edge, it often makes sense to take the safer, steadier profit rather than gamble on striking it big while leaving ourselves fully exposed to the consequences if the flip doesn't go our way.
Betting on Yards Per Carry Again
Earlier this year, we bet on yards per carry regressing, and our prediction came up just short, the first time we've ever made that particular bet and lost. That stuck in my craw, and I'm itching for a chance to restart our winning streak, so let's run it back.
Right now, eighteen running backs have rushed for at least 500 yards on the season. If we sort by yards per carry, the Top 5 are Raheem Mostert, James Cook, Christian McCaffrey, Bijan Robinson, and Breece Hall. These backs average 13.1 carries for 66.0 yards, a 5.05 ypc average. This is our Group A.
The Bottom 5 are Gus Edwards, Travis Etienne, Joe Mixon, Tony Pollard, and Josh Jacobs. These backs average 15.7 carries for 59.7 yards, a 3.80 ypc average. This is our Group B.
Our samples are a little bit smaller than I'd prefer, but I wanted to focus more on top-end starters this time. And besides, we can't possibly lose this prediction twice in a single year, can we? (In fact, we can.) To this point, Group A has outrushed Group B by 11% per game. Through the magic of regression, I predict that Group B will average more rushing yards per game over the next four weeks.
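And for anyone who wants to check the bookkeeping, here's a minimal sketch of the comparison using the rounded per-game group averages quoted above (so the output will differ slightly from the exact figures in the text):

```python
# Group averages as quoted above (per player, per game).
group_a = {"carries": 13.1, "rush_yards": 66.0}  # top 5 backs by YPC
group_b = {"carries": 15.7, "rush_yards": 59.7}  # bottom 5 backs by YPC

ypc_a = group_a["rush_yards"] / group_a["carries"]        # ~5.0
ypc_b = group_b["rush_yards"] / group_b["carries"]        # ~3.8
edge = group_a["rush_yards"] / group_b["rush_yards"] - 1  # ~0.11 (11% per game)

print(f"Group A: {ypc_a:.2f} YPC | Group B: {ypc_b:.2f} YPC | Group A edge: {edge:.0%}")
```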