Regression Alert: Week 11

Just because players regress doesn't mean they're "due".

Adam Harstad | Published 11/14/2024


For those who are new to the feature, here's the deal: every week, I break down a topic related to regression to the mean. Some weeks, I'll explain what it is, how it works, why you hear so much about it, and how you can harness its power for yourself. In other weeks, I'll give practical examples of regression at work.

In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.

Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Justin Jefferson is one of the top performers in my sample, then Justin Jefferson goes into Group A, and may the fantasy gods show mercy on my predictions.
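
(If you like seeing things spelled out, here's a minimal sketch of that workflow in Python. The player names, metric values, and points per game are all invented for illustration-- this is the shape of the process, not anything from an actual slate.)

```python
# Illustrative sketch of the Group A / Group B setup.
# All names and numbers below are hypothetical.
players = [
    # (name, metric_value, ppg_so_far)
    ("Receiver 1", 210.0, 18.1),
    ("Receiver 2", 180.0, 16.4),
    ("Receiver 3", 95.0, 12.3),
    ("Receiver 4", 70.0, 9.8),
]

# Rank everyone by the chosen metric, best to worst.
ranked = sorted(players, key=lambda p: p[1], reverse=True)

# Top half becomes Group A, bottom half Group B -- no cherry-picking allowed.
half = len(ranked) // 2
group_a, group_b = ranked[:half], ranked[half:]

def avg_ppg(group):
    return sum(p[2] for p in group) / len(group)

# Step 1: confirm Group A has outscored Group B to this point.
assert avg_ppg(group_a) > avg_ppg(group_b)

# Step 2: the prediction itself -- going forward, Group B outscores Group A.
print("Prediction: Group B outscores Group A from here on out.")
```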

And then, because predictions are meaningless without accountability, I track and report my results. Here's last year's season-ending recap, which covered the outcome of every prediction made in our seven-year history, giving our top-line record (41-13, a 76% hit rate) and lessons learned along the way.


Our Year to Date

Sometimes, I use this column to explain the concept of regression to the mean. In Week 2, I discussed what it is and what this column's primary goals would be. In Week 3, I explained how we could use regression to predict changes in future performance-- who would improve, who would decline-- without knowing anything about the players themselves. In Week 7, I explained why large samples are our biggest asset when attempting to benefit from regression. In Week 9, I gave a quick trick for evaluating whether unfamiliar statistics are likely stable or unstable.

Sometimes, I point out broad trends. In Week 5, I shared twelve years' worth of data demonstrating that preseason ADP held as much predictive power as performance to date through the first four weeks of the season.

Other times, I use this column to make specific predictions. In Week 4, I explained that touchdowns tend to follow yards and predicted that the players with the highest yard-to-touchdown ratios would begin outscoring the players with the lowest. In Week 6, I explained that yards per carry was a step away from a random number generator and predicted the players with the lowest averages would outrush those with the highest going forward.

In Week 8, I broke down how teams with unusual home/road splits usually performed going forward and predicted the Cowboys would be better at home than on the road for the rest of the season. In Week 10, I explained why interceptions varied so much from sample to sample and predicted that the teams throwing the fewest interceptions would pass the teams throwing the most.

The Scorecard

| Statistic Being Tracked | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
| Yard-to-TD Ratio | Group A averaged 17% more PPG | Group B averages 10% more PPG | None (Win!) |
| Yards per carry | Group A averaged 22% more yards per game | Group B averages 38% more yards per game | None (Win!) |
| Cowboys Point Differential | Cowboys were 90 points better on the road than at home | Cowboys are 16 points better on the road than at home | 7 |
| Team Interceptions | Group A threw 58% as many interceptions | Group B has thrown 41% as many interceptions | 3 |

The Cowboys put Dak Prescott on injured reserve, and it's possible he took the rest of the offense with him; against the Eagles, Dallas finished with fewer than 50 net passing yards. This is quite bad for our prediction. At the time we made it, the team had five home and five road games remaining in the sample. Because of how the schedule falls, though, 40% of those road games come with Dak Prescott under center, compared to 0% of the home games.

This is why we ordinarily prefer to bundle our predictions into groups-- over larger samples, the unlucky breaks hit Group A at about the same rate that they hit Group B, and everything evens out. But we do like to try something different on occasion, and while there are still seven weeks to go, this might be a situation where it bites us.

Our interception prediction is faring much better so far. There was an error in my math last week-- I said that Group B led Group A in interceptions 89 to 64, but the real lead was 111 to 64. This will make it more difficult for Group B to flip the result. I worried about it for much of last week... until Sunday Night Football, when the "low-interception" Lions combined with the "low-interception" Texans to throw 7 picks, as many as all Group B teams combined.


Gambler's Fallacy and Regression to the Mean

Before we start, a quick quiz: 

Imagine a receiver plays especially well over the first eight games of a sixteen-game season, averaging 100 yards per game (on pace for 1600 total). Imagine that we also happen to know this player is overperforming; his "true mean" performance level is just 80 yards per game. How many yards per game should we expect this receiver to average at the end of the year?

We'll get to the answer in a bit.


The goal of this column is to convince you to view regression to the mean as a force of nature, implacable and inevitable, a mathematical certainty. I can generate a list of players and, without knowing a single thing about any of them, predict which ones will perform better going forward and which will perform worse. I like to say that I don't want any analysis in this column to be beyond the abilities of a moderately precocious 10-year-old.

But it's important that we give regression to the mean as much respect as it deserves... and not one single solitary ounce more. 

This is difficult because regression is essentially the visible arm of random variation, and our brains are especially bad at dealing with genuine randomness. We're just not wired that way. We see patterns in everything. There's even a name for this hardwired tendency to "discover" patterns in random data: Apophenia.

A fun example of apophenia is pareidolia, or the propensity to "see" faces in random places. Our ancestors used to tell stories of the "Man in the Moon". We... type silly faces to communicate emotion over the internet. Yes, pareidolia is why I can type a colon and a close paren and you'll immediately know that I'm happy and being playful. :)

Our ability to "see" these faces is surprisingly robust. -_- is just three short lines, and not only do most people see a face, they also mentally assign it a specific mood. '.' works as well. With small changes, I can convey massive differences in that mood. (^.^) and (v.v) are remarkably similar, yet the interpreted moods are drastically different.


 Another less-endearing manifestation of apophenia is formally called the gambler's fallacy (and informally called "the reason Las Vegas keeps building bigger casinos"). We look at random sequences of events and instead of seeing faces, we see trends. A roulette wheel might land on 7 three times in six spins, and suddenly we think the number 7 is "hot". Or a wheel might not land on 00 for three hundred straight spins, and now we believe that 00 is "due". But randomness doesn't work that way; the odds of a roulette wheel landing on a number when it's "hot" are exactly the same as the odds of it landing on that number when it's "cold" (1 in 38 on an American-style "double zero" roulette wheel).
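
You don't have to take my word for the math, either. Here's a quick simulation sketch-- entirely my own toy example, nothing rigorous-- that spins a simulated double-zero wheel a million times and checks how often a "hot" number (one that has come up at least three times in the last six spins) hits on the very next spin.

```python
import random

# Toy simulation: on an American double-zero wheel (38 pockets), does a "hot"
# number -- one that hit 3+ times in the last six spins -- hit more often next?
POCKETS = 38
SPINS = 1_000_000

random.seed(0)
results = [random.randrange(POCKETS) for _ in range(SPINS)]

hot_chances = hot_hits = 0
for i in range(6, SPINS):
    window = results[i - 6:i]
    for n in set(window):
        if window.count(n) >= 3:          # n is "hot" right now
            hot_chances += 1
            hot_hits += (results[i] == n)

print(f"Baseline odds of any single number: {1 / POCKETS:.4f}")
print(f"Odds a 'hot' number hits next spin: {hot_hits / hot_chances:.4f}")
# Both print within sampling noise of 0.0263 (1 in 38) -- being "hot" buys nothing.
```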

It's very tempting to see regression to the mean as the universe's enforcement mechanism for the gambler's fallacy. In his rookie year, Brock Purdy threw an interception on 2.4% of his throws. Last year, he didn't throw his first interception until Week 6, posting a 0.6% interception rate over that stretch. Afterward, his interception rate spiked dramatically, and he finished the season averaging an interception on... 2.5% of his throws, right in line with his rookie average. (This year, he is again averaging a 2.5% interception rate.)

Score one for the universe.

Except Purdy only got there because he threw an interception on 3.6% of his throws after Week 6, a value that is likely much higher than his true mean and every bit as unlikely as the interception-free start to his season. It wasn't "fate" that he'd go on such a streak; the universe doesn't have scales it's secretly trying to balance. And even our entire framing is infected with our desire to see patterns: why was he only "due" to regress after his sixth game? Why not after his third? Why not after his tenth?
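
To see how little "fate" is required, here's another toy simulation-- my own illustration, with a made-up workload rather than Purdy's actual attempt counts. Give a quarterback a fixed 2.5% interception chance on every single throw, then check how often he opens a season with a long interception-free run and what his rate looks like afterward.

```python
import random

# Toy model: a QB with a constant 2.5% interception chance on every attempt.
# (Illustrative numbers only -- not Purdy's actual attempt counts.)
INT_RATE = 0.025
ATTEMPTS_PER_GAME = 30
GAMES = 17
SEASONS = 20_000

random.seed(1)
clean_starts = later_picks = later_attempts = 0

for _ in range(SEASONS):
    picks = [sum(random.random() < INT_RATE for _ in range(ATTEMPTS_PER_GAME))
             for _ in range(GAMES)]
    if sum(picks[:5]) == 0:                    # interception-free first five games
        clean_starts += 1
        later_picks += sum(picks[5:])          # what happens after the clean start?
        later_attempts += ATTEMPTS_PER_GAME * (GAMES - 5)

print(f"Seasons opening with five clean games: {clean_starts / SEASONS:.1%}")
print(f"Interception rate after those starts:  {later_picks / later_attempts:.1%}")
# The second number lands right around 2.5% -- a clean start doesn't make a
# "correction" any more likely than it already was.
```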

Imagine a six-year veteran player who averages one touchdown per game for his career-- let's call it 96 touchdowns in 96 games. (This number is implausible and would likely regress, but let's pretend it's stable to keep the math simple.) Imagine he scores 0 touchdowns over the first four games of the season; how many touchdowns would you expect him to score over the next four? Imagine instead that he scores 8 touchdowns over the first four games of the season; how many touchdowns would you expect him to score over the next four?

If you think he's "due" for extra touchdowns to make up for his cold start or "due" for a touchdown-less streak after his hot start, you're falling prey to the gambler's fallacy. A player's "true touchdown rate" after a long cold streak is exactly the same as his "true touchdown rate" after a long hot streak. Regression to the mean doesn't magically force cold streaks to follow hot streaks to restore balance to the universe. In fact, a player is just as likely to follow up a hot streak with another hot streak as he is to follow it with a cold streak. The answer to both questions should be "four expected touchdowns over the next four games".

(Actually, if you want to be extra precise, after a cold start, the player in question will now have scored 96 touchdowns in 100 games, so the expectation going forward should be 0.96 touchdowns per game or about 3.8 touchdowns over his next four games. Similarly, after a hot start, the player will be averaging 1.04 touchdowns per game for his career, which is about 4.2 touchdowns over the next four games in expectation; hot streaks and cold streaks should cause us to revise our estimate of a player's "true" performance level.)
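
(Here's that bookkeeping as a few lines of Python, purely to make the arithmetic explicit-- the 96-touchdowns-in-96-games career line is the same made-up example from above.)

```python
career_tds, career_games = 96, 96   # the invented "stable" career baseline

def next_four_games(recent_tds, recent_games=4):
    """Fold the recent stretch into the career rate, then project four games out."""
    rate = (career_tds + recent_tds) / (career_games + recent_games)
    return rate * 4

print(next_four_games(0))   # cold start: 96/100 = 0.96 per game, ~3.84 expected
print(next_four_games(8))   # hot start: 104/100 = 1.04 per game, ~4.16 expected
```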

Which brings us to our question at the top. The receiver's "true mean" might be 80 yards per game, but we shouldn't expect him to finish this year averaging 80 yards per game; he's already "banked" that overperformance over the first half of the season, and that's not going anywhere. We should expect him to average 80 yards per game over the second half, which (combined with his 100-yard average from the first half) results in an average of 90 yards per game for the whole season.

In order to average 80 yards per game for the full season, the receiver would need to average just 60 yards per game the rest of the way. But that's an equally unlikely outcome (given that we know his "true mean" is 80). No one is ever "due" for bad games just because they're playing well.
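
(The same receiver arithmetic, spelled out so you can see where the 90 and the 60 come from.)

```python
games_played, games_left = 8, 8
actual_ypg, true_mean_ypg = 100, 80

# The first-half production is banked; expect the true mean the rest of the way.
season_ypg = (actual_ypg * games_played + true_mean_ypg * games_left) / 16
print(season_ypg)   # 90.0 yards per game for the full season

# What he'd have to average to finish the season at his "true mean" of 80:
needed = (true_mean_ypg * 16 - actual_ypg * games_played) / games_left
print(needed)       # 60.0 -- an outcome as unlikely as his hot first half
```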

This burning need to find patterns, whether any patterns exist or not, can be a real hindrance in fantasy football. When we see a player on a lucky streak, we'll think he's "hot" and expect his luck to continue going forward. Or we'll think he's "due" and expect his luck to reverse going forward.

But the universe, the very nature of randomness itself, is unimpressed by our expectations. This is why many smart analysts prefer the term "reversion to the mean" instead of "regression to the mean", because it doesn't imply any specific directional force. When a player is coming off a particularly lucky stretch, the most likely result isn't another lucky stretch. And it's not an unlucky stretch, either. The expectation instead should be neutral luck. Just because a player's performance deviated from expectations doesn't mean we should expect more deviations.

 
