Regression Alert: Week 16

By Adam Harstad | Published 12/21/2023

Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.

For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes, I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes, I'll give some practical examples of regression at work.

In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.
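If you prefer to see that procedure spelled out, here's a minimal sketch in Python. The player data, field names, and group cutoff below are placeholders for illustration, not the actual data or methodology behind these predictions.

```python
# A minimal sketch of the Group A / Group B procedure described above.
# The sample players, field names, and 50% cutoff are illustrative
# assumptions, not the column's actual data or methodology.

def split_groups(players, metric, cutoff=0.25):
    """Rank players by `metric` (descending) and return the top and
    bottom `cutoff` fractions as (group_a, group_b)."""
    ranked = sorted(players, key=lambda p: p[metric], reverse=True)
    k = max(1, int(len(ranked) * cutoff))
    return ranked[:k], ranked[-k:]

def group_avg(group, field):
    """Average of a numeric field across a group of player dicts."""
    return sum(p[field] for p in group) / len(group)

# Hypothetical running backs: rank by yards per carry to date, then compare
# rushing yards per game before and after the prediction is made.
players = [
    {"name": "RB1", "ypc": 5.6, "ypg_before": 78, "ypg_after": 61},
    {"name": "RB2", "ypc": 5.2, "ypg_before": 74, "ypg_after": 66},
    {"name": "RB3", "ypc": 4.1, "ypg_before": 70, "ypg_after": 72},
    {"name": "RB4", "ypc": 3.4, "ypg_before": 65, "ypg_after": 71},
]
group_a, group_b = split_groups(players, "ypc", cutoff=0.5)
print(group_avg(group_a, "ypg_before") > group_avg(group_b, "ypg_before"))  # verify A leads so far
print(group_avg(group_b, "ypg_after") > group_avg(group_a, "ypg_after"))    # the regression bet
```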

Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.

Most importantly, because predictions mean nothing without accountability, I report on all my results in real time and end each season with a summary. Here's a recap from last year detailing every prediction I made in 2022, along with all results from this column's six-year history (my predictions have gone 36-10, a 78% success rate). And here are similar roundups from 2021, 2020, 2019, 2018, and 2017.


The Scorecard

In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.

In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.

In Week 4, I explained that touchdowns follow yards, but yards don't follow touchdowns, and predicted that high-yardage, low-touchdown receivers were going to start scoring a lot more going forward.

In Week 5, we revisited one of my favorite findings. We know that early-season overperformers and early-season underperformers tend to regress, but every year, I test the data and confirm that preseason ADP is still as predictive as early-season results even through four weeks of the season. I sliced the sample in several new ways to see if we could find some split where early-season performance was more predictive than ADP, but I failed in all instances.

In Week 6, I talked about how when we're confronted with an unfamiliar statistic, checking the leaderboard can be a quick and easy way to guess how prone that statistic will be to regression.

In Week 7, I discussed how just because something is an outlier doesn't mean it's destined to regress and predicted that this season's passing yardage per game total would remain significantly below recent levels.

In Week 8, I wrote about why statistics for quarterbacks don't tend to regress as much as statistics for receivers or running backs and why interception rate was the one big exception. I predicted that low-interception teams would start throwing more picks than high-interception teams going forward.

In Week 9, I explained the critical difference between regression to the mean (the tendency for players whose performance had deviated from their underlying average to return to that average) and the gambler's fallacy (the belief that players who deviate in one direction are "due" to deviate in the opposite direction to offset).

In Week 10, I discussed not only finding stats that were likely to regress to their "true mean", but also how we could estimate what that true mean might be.

In Week 11, I explained why larger samples work to regression's benefit and made another yards per carry prediction.

In Week 12, I used a simple model to demonstrate why outlier performances typically require a player to be both lucky and good.

In Week 13, I talked about how a player's mean wasn't a fixed target and predicted that rookie performance would improve later in the season.

In Week 14, I mentioned that hot and cold streaks are mostly a mirage and that all players tend to regress strongly toward their full-season averages.

In Week 15, I looked at the disheartening finding that even the best teams only win a title 30-40% of the time.

| Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Remaining Weeks |
| --- | --- | --- | --- |
| Yards Per Carry | Group A had 42% more rushing yards/game | Group A has 10% more rushing yards/game | None (Loss) |
| Yard-to-TD Ratio | Group A had 7% more points/game | Group B has 38% more points/game | None (Win) |
| Passing Yards | Teams averaged 218.4 yards/game | Teams average 218.7 yards/game | 2 |
| Interceptions Thrown | Group A threw 25% fewer interceptions | Group B has thrown 11% fewer interceptions | None (Win) |
| Yards Per Carry | Group A had 10% more rushing yards/game | Group A has 19% more rushing yards/game | None (Loss) |
| Rookie PPG | Group A averaged 9.05 ppg | Group A averages 9.69 ppg | 1 |
| Rookie Improvement | | 57% are beating their prior average | 1 |
| "Hot" Players Regress | Players were performing at an elevated level | Players have regressed 120% to season avg. | 2 |

Several months ago, I noted that passing yards per game was at its lowest level in over a decade and predicted that it would fall lower still, but I said we'd likely need the winter weather to get there. After rebounding slightly for a while, yards per game is finally creeping back down; it's now within three-tenths of a yard of its original target with two weeks to go. The prediction will likely be a close one, but we're well-positioned for the stretch run.

Despite stacking the deck as much as possible against our rookies, they continue to improve from their early-season form. Since our prediction, a rookie receiver ranks 5th (Rashee Rice), another ranks 10th (Puka Nacua), and a third ranks 17th (Jayden Reed); they're looking like potential league-winners in the fantasy playoffs. (This tendency for rookies to improve late in the year when the games matter the most is why I love drafting them in August.)

Finally, our "hot" players have not just regressed to their full-season average of 12.08 points per game, but they've regressed right past it to 11.24 ppg. If you thought that Trey McBride's or Deebo Samuel's hot streak entering the playoffs meant they'd carry your team once you got there... well, so far, you'd be right. But if you thought the same about Dak Prescott, Jaylen Warren, Dalton Kincaid, DK Metcalf, Mike Evans, or most other players, you've probably seen nothing but disappointment to this point.

Only 10 of the original 33 players have been producing closer to their hot streak than their season average over the last two weeks, and two of those ten are rookie receivers from the prediction above (Rice and Reed).
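For anyone wondering how a figure like "regressed 120%" can be read, one natural interpretation is the share of the gap between the hot-streak scoring rate and the season-long rate that has since been closed. The sketch below illustrates that reading; the 12.08 and 11.24 ppg figures come from the paragraph above, while the hot-streak average is a hypothetical placeholder since the exact value isn't restated here.

```python
# One way to read a "regressed 120%" figure: the fraction of the gap between
# the hot-streak scoring rate and the season-long rate that has been closed.
# 12.08 and 11.24 ppg are from the column; 16.3 ppg is a hypothetical
# hot-streak level, used only to make the arithmetic concrete.

hot_streak_ppg = 16.3    # hypothetical elevated level during the hot streak
season_ppg     = 12.08   # full-season average (from the column)
current_ppg    = 11.24   # scoring since the prediction (from the column)

regression_pct = (hot_streak_ppg - current_ppg) / (hot_streak_ppg - season_ppg)
print(f"{regression_pct:.0%}")  # above 100% means the group overshot its season average
```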


How Right Do We Want To Be?

It's important to be right. One of the purposes of this column is to help you anticipate future trends so you can manage your team more intelligently. To do that, my predictions need to be right. Another purpose is to convince you that regression to the mean is a powerful and reliable force. To do that, my predictions need to be right. (The third aim is to teach you how to predict regression for yourself; my predictions don't actually need to be right for that one, though it certainly doesn't hurt.)

In fantasy football, winning often comes down to being right more than the next manager. You decide between two players in the draft, and if you're right, you'll score more points. You decide between two players in your weekly lineup, and if you're right, you'll win more games. You decide to make a trade (or not), you decide to make a waiver claim (or not), and at the end of the year, the manager who made the most right decisions has the best chance of holding up the trophy. (Not a very good chance, as we've discussed recently, but still the best chance.)

Given all that, you'd think a question like "How right do we want to be?" would be nonsensical. We want to be maximally right, obviously.

And on questions where we are forced to make a decision between limited options, that's true. If we have no choice but to choose, it is optimal to be right every single time. To crush every single draft pick, to nail every single start/sit choice.

But when our decision space is less bounded, when our choices include not just what to choose but whether to choose at all, the answer surprisingly changes. Sometimes it's possible to be too right. There are situations where a manager who is wrong more often will outcompete a manager who is wrong less often.


When You're Not Getting It Wrong Enough

I'll give the most common case I come across. I play in a lot of dynasty leagues, where teams keep their players indefinitely. Once a guy is on your roster, he's yours until you cut (or trade) him. Because of this, there are fewer paths to acquire new players; there's an annual draft for incoming rookies and unrostered free agents, but you might only get a few players a year out of it. You can add players off waivers, but you're usually not finding regular contributors there. As a result, if you have a need on your roster, it becomes more important in this format to trade with other leaguemates to fill it.

You would think that managers who are good traders would do well in this format, and you'd be right. But the best traders are not the managers who "win" trades most frequently. In fact, I would wager that a manager who wins 66% of their trades probably has a better long-term record than a manager who wins 80% of their trades.

Why? There's a saying in poker: "If you only get in good, you're not getting in enough." If you only buy into a pot when your odds of winning are 80%, you're leaving money on the table. You could find a hand where you have a 70% chance of winning, and betting that hand would still carry a positive expected value; you're expected to make money from playing it. But play a lot of 70% hands, and you'll pull your win rate down from 80% in a hurry.

Similarly, if you're winning 80% of your trades, you're only open to taking trades that are wildly biased in your favor. That's fine (winning is good), but it means there are opportunities out there for trades that are only moderately biased in your favor, and you're passing them up. And even more opportunities for trades that are mildly biased in your favor, and you're passing those up, too. There's profit just lying in the street, and you're too timid to bend over and pick it up.
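The poker and trading logic above is just an expected-value argument, and a toy calculation makes it concrete. The win probabilities, number of offers, and even-stakes payoff in this sketch are assumptions for illustration, not real trade data.

```python
# Toy expected-value illustration: taking only heavily favorable deals keeps
# your win rate high, but also taking thinner edges produces more total profit.
# Win probabilities, deal counts, and the even-stakes payoff are assumptions.

def strategy_value(opportunities, min_edge):
    """Take every opportunity whose win probability clears `min_edge`.
    Return (expected profit at even stakes, expected win rate on deals taken)."""
    taken = [p for p in opportunities if p >= min_edge]
    expected_profit = sum(2 * p - 1 for p in taken)   # +1 unit if you win, -1 if you lose
    win_rate = sum(taken) / len(taken) if taken else 0.0
    return expected_profit, win_rate

# Suppose the league offers ten 80%-favorable deals and twenty 70%-favorable deals.
offers = [0.80] * 10 + [0.70] * 20

picky_profit, picky_rate = strategy_value(offers, min_edge=0.80)
active_profit, active_rate = strategy_value(offers, min_edge=0.70)

print(f"Picky:  win rate {picky_rate:.0%}, expected profit {picky_profit:.1f} units")
print(f"Active: win rate {active_rate:.0%}, expected profit {active_profit:.1f} units")
# Picky:  win rate 80%, expected profit 6.0 units
# Active: win rate 73%, expected profit 14.0 units
```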

When I evaluate my dynasty teams, I try to ensure I'm winning between 60% and 70% of my trades. Much more than that, and it means I'm not being aggressive enough; I'm leaving money on the table. Much less than that, and it means I'm being too reckless and need to dial it back.

(Randall Munroe of XKCD has made a similar observation about product ratings online: high ratings are good... unless they're too high, in which case there probably aren't enough ratings.)

When I write Regression Alert, I have a similar thought process. It would be fairly easy for me to put up a 90-100% success rate if that were my goal. When Will Levis throws for four touchdowns in his first start, just predict that he won't do that again (because four-touchdown games are extremely rare from rookies). When Tyreek Hill sets a record for receiving yards through eight games, predict he'll average fewer receiving yards over the second half (because nobody in history has ever maintained anywhere close to that pace). When Sam Howell takes 40 sacks in his first eight games, predict he'll have fewer going forward (because human bodies aren't built to survive that kind of abuse).

But predictions like that aren't interesting. "Player who is doing what no player has ever done before will probably not keep doing it" is something that you don't really need me to tell you, and it's not going to do much to further my goals (helping your teams, instilling respect for regression's power, teaching how to spot it in the wild). If I'm never getting anything wrong, I'm never saying anything worth saying.

At the same time, I do want to get it right much more often than I get it wrong. Because again, I can't help your teams if I'm giving bad advice, I can't convince you about how cool regression is if I keep misidentifying it, and I can't teach you how to spot it if I'm not spotting it myself.

That's why I'll make riskier or more dramatic predictions, such as predicting a 40+% swing in rushing yards per game or leaving Tank Dell in the rookie sample after he got injured just to make the remaining rookies' task that much harder. It's why I make guesses about things I've never tried before. Because the worst that can happen is that I'm wrong, and it's important that that happen from time to time.

Of course, there are costs to being wrong. If this season were my first writing about regression to the mean, making two predictions about yards per carry and failing them both would probably spook me a bit. It might discourage me from making further predictions. This would be a mistake (as my nine successful yards per carry predictions and the size of the average reversal demonstrate). "One should be comfortable with the possibility of being wrong" requires a certain level of confidence in your process.

(Of course, the opposite can be risky, too. If your process is bad, being wrong is a signal that you should change it, and being too confident can cause you to ignore that signal. This is not a hobby that lends itself to easy solutions.)

But if you've never managed your teams with this philosophy, I'd urge you to consider it sometime. Society places tremendous pressure on us to be right all the time, even if you have to "fake it til you make it." You should never intentionally be wrong. But it's freeing to make decisions knowing that being wrong on occasion isn't just okay; it's actually for the best. You can take a calculated risk knowing that it might not pay off, and if it doesn't, you can accept the outcome without beating yourself up. If you're never getting anything wrong, you're never doing anything interesting.

