Regression Alert: Week 10

By Adam Harstad | Published 11/10/2022

Welcome to Regression Alert, your weekly guide to using regression to predict the future with uncanny accuracy.

For those who are new to the feature, here's the deal: every week, I dive into the topic of regression to the mean. Sometimes I'll explain what it really is, why you hear so much about it, and how you can harness its power for yourself. Sometimes I'll give some practical examples of regression at work.

In weeks where I'm giving practical examples, I will select a metric to focus on. I'll rank all players in the league according to that metric and separate the top players into Group A and the bottom players into Group B. I will verify that the players in Group A have outscored the players in Group B to that point in the season. And then I will predict that, by the magic of regression, Group B will outscore Group A going forward.

Crucially, I don't get to pick my samples (other than choosing which metric to focus on). If I'm looking at receivers and Cooper Kupp is one of the top performers in my sample, then Cooper Kupp goes into Group A and may the fantasy gods show mercy on my predictions.

Most importantly, because predictions mean nothing without accountability, I track the results of my predictions over the course of the season and highlight when they prove correct and also when they prove incorrect. At the end of last season, I provided a recap of the first half-decade of Regression Alert's predictions. The executive summary is we have a 32-7 lifetime record, which is an 82% success rate.

If you want even more details, here's a list of my predictions from 2020 and their final results. Here's the same list from 2019 and their final results, here's the list from 2018, and here's the list from 2017.


The Scorecard

In Week 2, I broke down what regression to the mean really is, what causes it, how we can benefit from it, and what the guiding philosophy of this column would be. No specific prediction was made.

In Week 3, I dove into the reasons why yards per carry is almost entirely noise, shared some research to that effect, and predicted that the sample of backs with lots of carries but a poor per-carry average would outrush the sample with fewer carries but more yards per carry.

In Week 4 I discussed the tendency for touchdowns to follow yards and predicted that players scoring a disproportionately high or low amount relative to their yardage total would see significant regression going forward.

In Week 5, I revisited an old finding that preseason ADP tells us as much about rest-of-year outcomes as fantasy production to date does, even a quarter of the way through a new season. No specific prediction was made.

In Week 6, I explained the concept of "face validity" and taught the "leaderboard test", my favorite quick-and-dirty way to tell how much a statistic is likely to regress. No specific prediction was made.

In Week 7, I talked about trends in average margin of victory and tried my hand at applying the concepts of regression to a statistic I'd never considered before, predicting that teams would win games by an average of between 9.0 and 10.5 points per game.

In Week 8, I lamented that interceptions weren't a bigger deal in fantasy football given that they're a tremendously good regression target, and then I predicted interceptions would regress.

In Week 9, I explained why the single greatest weapon for regression to the mean is large sample sizes. For individual players, individual games, or individual weeks, regression might only be a 55/45 bet, but if you aggregate enough of those bets, it becomes a statistical certainty. No specific prediction was made.
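To make the "aggregate enough 55/45 bets" idea concrete, here's a quick sketch (my illustration, not from the column) that computes the exact binomial probability that the favored side wins a majority of n independent 55/45 bets:

```python
from math import comb

def majority_win_prob(n: int, p: float = 0.55) -> float:
    """Exact probability that more than half of n independent
    bets land on the side favored with probability p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# One bet is barely better than a coin flip; a season's worth
# of aggregated bets is close to a lock.
for n in (1, 25, 101):
    print(n, round(majority_win_prob(n), 3))
```

A single bet wins just 55% of the time, but across roughly a hundred aggregated bets the favored side wins the majority more than 80% of the time, which is why this column leans on large groups of players rather than individual picks.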

| Statistic for Regression | Performance Before Prediction | Performance Since Prediction | Weeks Remaining |
| --- | --- | --- | --- |
| Yards per Carry | Group A had 24% more rushing yards per game | Group B has 25% more rushing yards per game | None (Win!) |
| Yards per Touchdown | Group A scored 3% more fantasy points per game | Group A has 12% more fantasy points per game | None (Loss) |
| Margin of Victory | Average margins were 9.0 points per game | Average margins are 10.5 points per game | 1 |
| Defensive INTs | Group A had 65% more interceptions | Group B has 56% more interceptions | 2 |

After a worrying couple of weeks, our "average margin" prediction is headed for a photo finish. How close is it? The original prediction was that "average margin of victory over the next four weeks would be between 9 and 10.5 points". Thanks to an exceptionally close week (winners won by just 8.5 points per game, on average), the running average sits right at 10.5 points, which means if the final week comes in under that mark we'll win, and if it comes in over we'll lose. Things are still very much up in the air, but for a prediction that was a little outside our usual comfort zone, results are encouraging.

To this point in the season, defenses are averaging 0.75 interceptions per game. When we made our prediction, the "high-interception" sample was averaging 1.28, the "medium-interception" sample was averaging 0.88, and the "low-interception" sample was averaging 0.38. Since interceptions are dominated by luck, we expected all of those numbers to regress strongly toward the league average, and indeed, in the three weeks since, the "high-interception" sample has averaged 0.82, the "medium-interception" sample 0.68, and the "low-interception" sample 0.70.

The "high-interception" group maintains a very slight edge in per-game rate, but with all three groups now averaging rates that are difficult to distinguish from the league average, our "low-interception" group's advantage in sheer volume (it contains far more defenses) remains substantial.


Luck Be a Lady...

I often like to stress that outcomes are the result of a combination of intrinsic factors and random chance. If a running back has a huge rushing day, there are plenty of contributors, including how fast that running back is, how good he is at breaking tackles, how well he read his blocking, how good the defense he faced was, how well his teammates were playing, whether the other running backs on the roster were healthy or hurt, what situations he was given his opportunities in (you can't run for 80 yards when you're getting a carry at the 50-yard line, after all), and numerous other factors.

Some of those factors are pretty stable, usually because they're intrinsic to the player himself. Guys who are good at breaking tackles in one game tend to be good at breaking tackles in the next game, too. Other factors are essentially just random chance. Players very rarely face the same defense in consecutive weeks, for instance.

The more an outcome is driven by intrinsic factors, the less it will regress between samples. The best example of this would be something that is 100% intrinsic, like height. If you take the five tallest players and the five shortest players in one sample, they're going to remain the five tallest and five shortest players in the next sample. They're not going to "regress" until they're all league-average height or anything. The gap between the best and the worst remains static from one sample to the next. Height is entirely intrinsic.

On the other hand, the more a factor is driven by luck, the more it will regress between samples. The best example would be something that is 100% luck-based, like correctly calling the pre-game coin flip. If one team has won eight consecutive coin flips and another team has lost eight consecutive coin flips, they'd still have the exact same expectation for how many of the next eight coin flips were likely to go in their favor. The gap between the "best" and the "worst" disappears between samples. Coin flips are entirely chance.

There's not really anything interesting in football that isn't a blend of intrinsic factors and random chance, though. As a result, from one sample to the next the gap doesn't stay the same, but it doesn't disappear entirely, either. Instead it usually just shrinks a bit. (How much depends entirely on the ratio of intrinsic factors to random chance.)
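A toy simulation (my illustration, with made-up parameters) makes that shrinking-gap behavior concrete: give each "player" a fixed intrinsic skill, blend it with fresh noise in two samples, and watch what happens to the gap between the top and bottom groups.

```python
import random

def sample_gap(w: float, n: int = 1000, seed: int = 42) -> tuple[float, float]:
    """Observed outcome = w * intrinsic skill + (1 - w) * fresh noise.
    Returns the top-vs-bottom-decile gap in sample 1 (which picks the
    groups) and the same players' gap in sample 2."""
    rng = random.Random(seed)
    skill = [rng.gauss(0, 1) for _ in range(n)]
    obs1 = [w * s + (1 - w) * rng.gauss(0, 1) for s in skill]
    obs2 = [w * s + (1 - w) * rng.gauss(0, 1) for s in skill]
    order = sorted(range(n), key=obs1.__getitem__)
    bottom, top = order[: n // 10], order[n - n // 10:]
    mean = lambda obs, grp: sum(obs[i] for i in grp) / len(grp)
    gap1 = mean(obs1, top) - mean(obs1, bottom)
    gap2 = mean(obs2, top) - mean(obs2, bottom)
    return gap1, gap2

# w=1.0 is pure skill (height): the gap persists in full.
# w=0.0 is pure luck (coin flips): the gap vanishes.
# w=0.5 is a blend (most football stats): the gap shrinks but survives.
for w in (1.0, 0.0, 0.5):
    print(w, sample_gap(w))
```

At w=1.0 the second-sample gap equals the first; at w=0.0 it collapses to roughly zero; at the 50/50 blend it lands in between, which is exactly the "shrinks a bit" pattern described above.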


The same is not true for fantasy football. There is a very common and very important factor in head-to-head fantasy leagues that is entirely driven by random chance. And because it is entirely driven by random chance, it should fully and completely regress to the mean across different samples.

That factor is schedule luck.

What is Schedule Luck?

Say you play in a 10-team league and have a very good squad. In one week, you post the second-highest score in the entire league. The most likely outcome is that you will win your matchup.

But that outcome isn't guaranteed. There are nine possible teams you can play (since you can't play yourself), and there are eight matchups that will result in you getting a win and one matchup that results in you getting a loss (if you're unfortunate enough to face off against the highest-scoring team). You could say that finishing with the second-highest weekly score should give you 8/9ths of a win in expectation. Or if you finished with the 2nd-highest score in a million different weeks, you should win 8/9ths of the time.

Some sites will track these odds in the form of "all-play records" (which is what your record would be if you played every team every week). And over a long enough timeline, your actual winning percentage and all-play winning percentage should converge.

But over short timelines, weird things can happen. I've seen a team finish with the 2nd-best score in three consecutive weeks and lose all three games. In expectation, that team "should" have won an average of 2.667 games. In actuality, that team won zero. They underperformed expectations by 2.667 wins.
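For the arithmetic-inclined, the expectation and the odds of that 0-for-3 streak work out as follows (assuming matchups are assigned at random in a 10-team league):

```python
from fractions import Fraction

opponents = 9                       # 10-team league: you can't play yourself
p_win = Fraction(opponents - 1, opponents)  # 2nd-best score beats 8 of the 9

expected_wins = 3 * p_win           # three straight weeks with the 2nd-best score
p_zero_wins = (1 - p_win) ** 3      # drawing the top scorer all three times

print(float(expected_wins))         # ≈ 2.667 expected wins
print(p_zero_wins)                  # 1/729, roughly a 0.14% chance
```

So that 0-3 stretch was about a 1-in-729 fluke, which is exactly the kind of extreme short-timeline outcome that washes out over a full season.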

Now, it's awful to lose wins in that manner. (And really nice to be on the other side of the coin and "gain" wins in terrible weeks thanks to schedule luck.) At the end of the year, when playoff seedings are decided, those "extra" wins and losses are a big deal.

But schedule luck is entirely luck, and you should expect underperformance and overperformance to be completely uncorrelated from one sample to the next. Just like sometimes NFL teams win a lot of coin tosses in a row, sometimes fantasy teams string together lucky wins or bad beats. But neither factor predicts anything going forward. In expectation, every team's actual win/loss record is most likely to line up with their all-play record going forward.

Now, while "schedule luck" is pure chance, "total wins" is a combination of underlying performance (an intrinsic factor) and schedule luck (chance). You might expect the winningest teams in your league to usually be the best teams. But selection is paradoxical: even if the role of luck is very small relative to intrinsic factors, the more stringent your selection filter, the more luck will dominate among those who pass it.

This video gives an example. It looks at a recent class of astronaut applicants to NASA. Over 18,300 people applied, and from that 18,300, just 11 were selected. The video imagines that getting selected for astronaut training is 95% skill and just 5% luck and simulates the class selection process 1,000 times. It found that despite skill representing 95% of the selection process, selected applicants ranked in the 94th percentile in luck on average, and 9.5 out of the 11 selected applicants would not have been selected if NASA did not factor in luck (which, again, represented just 1/20th of the total score). All of the selected applicants were extraordinarily skilled, but to get from 18,300 to 11, skill alone is not enough. They also needed extraordinary luck.
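A rough re-creation of that experiment (my sketch, with uniform random scores standing in for whatever distributions the video actually used) shows the same effect:

```python
import random

rng = random.Random(0)

def avg_luck_of_selected(n_applicants: int = 18_300, n_selected: int = 11,
                         skill_weight: float = 0.95) -> float:
    """Score = 95% skill + 5% luck; select the top 11 of 18,300 and
    return the average luck percentile of the selected group."""
    skill = [rng.random() for _ in range(n_applicants)]
    luck = [rng.random() for _ in range(n_applicants)]
    score = [skill_weight * s + (1 - skill_weight) * l
             for s, l in zip(skill, luck)]
    chosen = sorted(range(n_applicants), key=score.__getitem__)[-n_selected:]
    return 100 * sum(luck[i] for i in chosen) / n_selected

# Average over a few simulated selection rounds.
avg_luck = sum(avg_luck_of_selected() for _ in range(20)) / 20
print(round(avg_luck, 1))  # lands in the mid-90s despite luck's 5% weight
```

Even with luck weighted at only 5%, squeezing 18,300 applicants down to 11 means the survivors are drawn almost entirely from the very top of the luck distribution, just as the video found.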

What does this mean? If you showed me a list of the highest-scoring fantasy football teams in the entire country right now, I would guess that their net "luck" was zero-- because luck is totally random, and the expected outcome over any sample is always zero. But if you showed me a list of the WINNINGEST fantasy football teams in the entire country right now, I would guess that their skill was high, but their luck was also extremely high.

Now, it just so happens that a kind stranger has given me a massive data set to play with; the entire results of the Scott Fish Bowl to date.

For those not familiar with Scott Fish Bowl, it's an invitational tournament held every July for industry insiders and fans to compete against each other. This year it features 3,000 teams. And those teams play in 250 leagues with the same rules, and the top performers in each league through eleven weeks enter a giant tournament against the other winners.

Every week teams can get up to two wins or two losses. The first comes from the results of their head-to-head matchup, and the second is a "Victory Point"-- if a team finishes among the Top 6 weekly scorers, it gets a win, and if it finishes among the Bottom 6, it gets a loss. Through nine weeks, 261 teams have secured 14 or more wins. Those 261 teams average a whopping 1.46 extra wins thanks to schedule luck. 84 teams have gained at least two extra "schedule wins", while just 10 of the winningest teams have had negative luck.

Compare that to the 261 "best" teams. The 261 highest-scoring teams average just 0.25 "extra" wins from schedule luck. It's possible that the "victory point" format is very slightly skewing luck in favor of the best teams, and it's also possible that the sample just isn't quite large enough to expect this to go all the way to zero. Either way, "good" teams are not really any more or less likely to be lucky, but "winning" teams are massively more likely to have substantial luck on their side.

This concept applies to everything we see on a weekly basis. Tyreek Hill currently has 1104 receiving yards, the most by any receiver in history through nine games. Now, you don't set a record like that unless you're an incredibly talented player. But you also don't set a record like that unless you have substantial "luck" working in your favor, too. That's why we can bet against statistical leaders like that, because to top a leaderboard, being good is necessary, but it is not sufficient; you need to be lucky, too. Even if, as was the case with those astronaut candidates, luck only represented 5% of the variance in outcomes.

Anyway, back to the Scott Fish Bowl data, let's make a prediction. The regular season for the contest ends in two weeks. There are currently 35 teams with a winning record (10-8 or better) and an all-play percentage of 50% or worse. This is our Group A. On the other end, there are 269 teams with a losing record (8-10 or worse) and an all-play percentage greater than 50%. This is our Group B.

I don't know why the second sample is so much larger than the first; this may again be a function of the Victory Point screwing with our data. But I do know that Group A averages 10.3 wins and 7.7 losses (57.3%) despite an all-play winning percentage of just 48.5%, for an average of 1.57 "extra" wins through nine weeks. And Group B averages 7.5 wins and 10.5 losses (41.5%) despite an all-play winning percentage of 54.5%, for an average of 2.32 "fewer" wins through nine weeks. And since schedule luck is the purest form of luck, I predict that both groups should see no net luck over the next two weeks, with Group B finishing out the regular season with a higher winning percentage than Group A.
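Using the figures quoted above, the "extra wins" arithmetic behind both groups is simply actual wins minus the wins an all-play rate implies (nine weeks at two decisions per week is 18 games):

```python
GAMES = 18  # nine weeks, two win/loss decisions per week in SFB scoring

def extra_wins(actual_wins: float, all_play_pct: float) -> float:
    """Schedule luck: actual wins minus the wins the all-play rate implies."""
    return actual_wins - all_play_pct * GAMES

group_a = extra_wins(10.3, 0.485)  # winning records, sub-.500 all-play
group_b = extra_wins(7.5, 0.545)   # losing records, over-.500 all-play
print(round(group_a, 2), round(group_b, 2))  # ≈ +1.57 and ≈ -2.31
```

(The tiny mismatch with the 2.32 figure quoted above comes from rounding of the published group averages.)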

