Dynasty, in Theory: First Principles and Toy Models

A lot of problems can be solved by thinking carefully, but a little bit of data helps, too.

By Adam Harstad | Published 09/14/2024

There's a lot of strong dynasty analysis out there, especially when compared to five or ten years ago. But most of it is so dang practical-- Player X is undervalued, Player Y's workload is troubling, the market at this position is irrational, take this specific action to win your league. Dynasty, in Theory is meant as a corrective, offering insights into the strategic and structural nature of the game that might not lead to an immediate benefit but that should help us become better players over time.

The Power of First Principles

The exact forms ebb and flow-- analytics, machine learning, "AI" and Large Language Models (or LLMs)-- but data-driven investigations are ascendant across society. Fantasy football is no exception; there's no shortage of new studies coming out or new tools cropping up that leverage huge troves of data from tens of thousands of leagues to reach new insights.

As the name of this column might suggest, I tend to prefer the classics-- solving problems and reaching insights just by thinking about things for a bit. (If I want to dress it up, I'll call it "reasoning from first principles", but it amounts to the same thing.) I can conduct a study or build a model if I have to, but I'd rather skip all that messy data collection and processing if possible.

Done well, reasoning from first principles can be incredibly powerful, especially for addressing questions that aren't amenable to data-driven analysis. 

For instance, in 2013, I summarized existing studies on "beating" the NFL draft. They found that 1st round picks were more likely to become stars than 4th round picks, and occasionally a team would string together several successful picks in quick succession, but over long timelines, no team drafted better than any other team, suggesting any short-term deviations were driven by chance alone.

I compared this to the Efficient Market Hypothesis, which states that assets are priced based on all available information and suggests any future movement will only be the result of new information. One implication of this is that unless you have information that the market doesn't possess, it's impossible to "beat the market" in the long run.

But then I considered some of the differences between the NFL draft and the stock market. In the stock market, if an asset is overpriced, there is a mechanism whereby outsiders can push that price down (through e.g. shorting that stock). The draft lacks any such mechanism-- if the Las Vegas Raiders want to spend the #3 pick on a questionable receiver who ran a 4.3 40-yard dash, there's nothing the New England Patriots can do to make that receiver fall further.

I suggested both a "strong form" and a "weak form" of the efficient market hypothesis for the NFL draft. The "strong form" claimed that a player's draft position fully reflects his NFL prospects given all available information at the time, and that players are equally likely to underperform or overperform that price. The "weak form" said that draft position fully reflects NFL prospects, given all available information, for players drafted at or below the consensus-- but not for "reaches" (players drafted earlier than they "should have" been).

And then I wrote: "I tend to favor the strong-form version of the hypothesis, though, out of sheer practicality; it is impossible to know what the actual NFL consensus really was."

That was correct... when I wrote it. In 2014, a journalist named Arif Hasan started compiling a "consensus big board", which aggregated all of the predraft rankings from every pundit and analyst he could find. The project grew over the years and by 2024 included more than 100 draft boards.

In 2021, a study on Hasan's consensus big board found that it did, indeed, predict NFL draft position. With that established, a subsequent study looked at career outcomes of players who were deemed "reaches" or "steals" according to the consensus big board.

The second study found that "steals" (or players who fell past their projected draft position) did not perform any better, on average, than "non-steal" players drafted in the same range. On the other hand, "reaches" (or players drafted significantly earlier than projected) did underperform their draft position-- though they outperformed the original projection. (On average, they were worth about as much as a player picked halfway between where they were drafted and where they were projected.)

That second study fully validated the weak form hypothesis postulated from first principles nearly a decade prior.


The Limits of First Principles

I've seen numerous examples like this throughout my career-- cases where simply thinking carefully about a problem produced answers as good as (or better than) those from rigorously studying it. Better still, pure theory sometimes lets us go where there is no data to be had.

But as I wrote to end last season, all of the theory in the world doesn't matter if it doesn't match reality. It's heartening that data later validated an old theory, but this only underlines the importance of those later studies.

On that note, one of my favorite pastimes in fantasy is creating various "first principles" approaches to estimating the value of the regular season vs. the playoffs. I lacked the data to study the question directly, so I developed all sorts of toy models to stand in for it.

Here's one: Imagine a 12-team dice-rolling league. Every week, each team plays a head-to-head match against another squad, rolling 20 6-sided dice and scoring the sum of their rolls. At the end of the regular season, the top six teams make a single-elimination playoff, with the top two earning byes.
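
(For the concretely minded, here's a minimal Python sketch of a single week's matchup under those rules. How ties are settled isn't specified above, so simply re-rolling them is my own assumption.)

```python
import random

def weekly_score(num_dice=20):
    """One team's weekly score: the sum of twenty six-sided dice."""
    return sum(random.randint(1, 6) for _ in range(num_dice))

# One head-to-head matchup; by symmetry, each side wins half the time.
# (Ties aren't addressed above, so this sketch just re-rolls them.)
a, b = weekly_score(), weekly_score()
while a == b:
    a, b = weekly_score(), weekly_score()
print(f"Team A {a} -- Team B {b}: {'A' if a > b else 'B'} wins")
```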

In this league, every team has a 50% chance of winning every game and an equal 1-in-12 chance of winning the league (about 8.3%). Imagine now that a genie appears to one participant and allows that player to select one week of the year. For that week only, every die he or she rolls will come up a 6, guaranteeing a win.

If this player accepts the genie's offer during the regular season, what are their odds of winning the title? Those odds certainly rise-- the team is marginally more likely to make the playoffs and earn a bye, though they still only have a 50/50 shot of advancing each week once there. It's hard to guess the exact impact, but maybe the chances become 1-2 percentage points better.

What happens if, instead, this player accepts the genie's offer for the week of the championship game? 10 out of 12 teams will not reach the championship, so there's a 5-in-6 chance that they will be eliminated before this point and the offer goes to waste. But on the 1-in-6 chance they make it this far, they are guaranteed a victory, so their title odds become 1-in-6-- double their original chances. That's an increase of more than 8 percentage points, 6-7 points larger than the estimated benefit during the regular season.

Repeating the process for the semifinals reveals that week would also double the overall title chances. The math is a bit stickier, but accepting the genie's offer during wildcard weekend increases a team's chances from 1/12 to 1/8, a boost of roughly 4 percentage points-- 2-3 points larger than the regular-season estimate.
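
For anyone who wants to check that stickier math, here's a short Python sketch of the exact arithmetic under the assumptions above (twelve identical coin-flip teams, so each is equally likely to land in any seed; six playoff spots; byes for the top two seeds):

```python
from fractions import Fraction

# With 12 identical teams, every seed is equally likely.
p_bye      = Fraction(2, 12)   # finish as the 1 or 2 seed (bye to the semifinal)
p_wildcard = Fraction(4, 12)   # finish as the 3-6 seed (must win a wildcard game)
half       = Fraction(1, 2)    # every non-guaranteed game is a coin flip

# Title odds = P(bye) * P(win semi) * P(win final)
#            + P(wildcard berth) * P(win wildcard) * P(win semi) * P(win final)
baseline = p_bye * half * half + p_wildcard * half * half * half   # 1/12
finals   = p_bye * half * 1    + p_wildcard * half * half * 1      # 1/6
semis    = p_bye * 1 * half    + p_wildcard * half * 1 * half      # 1/6
wildcard = p_bye * half * half + p_wildcard * 1 * half * half      # 1/8

print(baseline, finals, semis, wildcard)   # 1/12 1/6 1/6 1/8
```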

The results of these toy models suggest something important about fantasy football: a player who will miss the first month of the season might miss 25% of the possible games he could appear in for the year, but that doesn't mean he will lose 25% of his total value in the process.

That's well and good in theory, but how does it work in practice? In recent years, studies on average playoff rates by record and win percentage by team quality have given me the data I need to model these scenarios directly. And so I have.

A team with a 50% expected win rate in every game (much like the dice-rolling team from above) has an 8.3% chance of winning the title. Setting the win rate to 100% in either the semifinals or the finals results in a 16.6% chance of winning, exactly as expected. Setting the win rate to 100% in the wildcard round boosts championship odds to 12.5%. This again exactly matches what we expected (which was no surprise, since all of these original values could be calculated precisely).

What about the one value we had to guess at? Giving our team the guaranteed win in the regular season resulted in an expected championship win rate of 10.35%-- a boost of about 2 percentage points, which lands at the very top of our originally predicted range. According to the model, a single regular-season week is not nearly as low-value as I thought it might be.
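
If you'd like to poke at these numbers yourself, a quick Monte Carlo of the toy league gets you most of the way there. The sketch below is not the exact model described above (that one leans on the empirical playoff-rate data just mentioned); it's a simplified stand-in with its assumptions spelled out: a 14-week regular season, random weekly pairings, standings ties broken at random, and every game our team doesn't control treated as a coin flip.

```python
import random

TEAMS = 12
REG_WEEKS = 14                # assumed regular-season length (not specified above)
TOTAL_WEEKS = REG_WEEKS + 3   # plus wildcard, semifinal, and championship rounds

def wins_title(week_probs):
    """One simulated season. week_probs holds our team's chance of winning its
    game in each regular-season week plus the three playoff rounds (17 entries);
    every other game is a coin flip, and standings ties are broken at random.
    Returns True if our team (team 0) wins the title."""
    wins = [0] * TEAMS
    for week in range(REG_WEEKS):
        order = random.sample(range(TEAMS), TEAMS)        # random weekly pairings
        for a, b in zip(order[::2], order[1::2]):
            p_a = 0.5
            if a == 0:
                p_a = week_probs[week]
            elif b == 0:
                p_a = 1 - week_probs[week]
            wins[a if random.random() < p_a else b] += 1
    # Seed by record (random tiebreak); the top six make the playoffs.
    seeds = sorted(range(TEAMS), key=lambda t: (wins[t], random.random()), reverse=True)
    seed = seeds.index(0)
    if seed >= 6:
        return False
    # The top two seeds skip the wildcard round; from there, win out or go home.
    rounds = [REG_WEEKS + 1, REG_WEEKS + 2] if seed < 2 else \
             [REG_WEEKS, REG_WEEKS + 1, REG_WEEKS + 2]
    return all(random.random() < week_probs[r] for r in rounds)

def title_odds(week_probs, sims=20_000):
    return sum(wins_title(week_probs) for _ in range(sims)) / sims

coin_flip = [0.5] * TOTAL_WEEKS
print("baseline:               ", title_odds(coin_flip))               # should land near 1/12
print("guaranteed week 1 win:  ", title_odds([1.0] + coin_flip[1:]))   # compare with the ~10.35% above
print("guaranteed championship:", title_odds(coin_flip[:-1] + [1.0]))  # should land near 1/6
```

The exact figures will wobble a bit with the assumed regular-season length, the tiebreak rules, and plain Monte Carlo noise, but the ordering of the scenarios holds.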

I didn't just create this model to test my intuitions (although periodically testing one's intuitions is important). I built it because theory is powerful and when done well it's almost always directionally correct, but it is rarely precise. For precision, you need data. We can guess that "reaches" are less valuable than their draft position and more valuable than their projected draft position, but we need data to determine just how much more or less.

Now that I can model season outcomes directly, we can look at questions like "How much does a win-now trade impact your championship odds?" or "What sorts of teams benefit most from these trades?" or "When does it make sense to concede a season early and start rebuilding in earnest?"

Next week, we'll use it to do just that. In the meantime, I wanted to provide an opportunity for my readers to test their own intuitions. If you'd like, carefully consider each of the following questions and predict what you think the answer will be.

  1. With our assumption that we had a 50% chance to win each game, taking the genie's offer in the wildcard round increased our title odds by 2.2 percentage points more than taking it in the regular season, while taking it in the finals was 6.3 points better. If our default chance of winning each game were 70% instead, would the wildcard round become more advantageous, less advantageous, or remain the same? How about the finals?
  2. What if there were no bye weeks and only four teams made the playoffs? Would the finals week become more or less valuable compared to a random regular season week, or would its value remain the same?
  3. If the genie offered to guarantee a victory in two regular-season weeks, would that be twice as valuable?
  4. How many regular-season victories would he have to guarantee for it to become more valuable than a guaranteed high score during the championship week?
  5. Is it better to improve your chances of winning every game by 5% if you're starting with a 50% chance to win or if you're starting with a 60% chance to win? At what starting winning percentage would an extra 5% chance be most impactful?
  6. Who is more likely to win a championship, a team with a 50% chance to win every game or a team that alternates between a 30% chance to win and a 70% chance to win?

If you don't want to wait until next week, I've also made a spreadsheet available here; simply save a copy and you can alter the assumptions as you see fit to test the impact of various scenarios.
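
And if you'd rather tinker in code than in a spreadsheet, the simulator sketched above works for the same purpose: feed title_odds() whatever per-week win probabilities a question implies and compare the results. (A couple of the questions, like changing the playoff format, require editing the bracket logic itself; most reduce to changing the probabilities.) The schedules below are just illustrative shapes, not answers:

```python
# Re-using title_odds() from the earlier sketch; 14 regular-season weeks + 3 playoff rounds.
schedules = {
    "steady 50%":        [0.5] * 17,
    "steady 70%":        [0.7] * 17,
    "alternating 30/70": [0.3, 0.7] * 8 + [0.3],
}
for label, probs in schedules.items():
    print(f"{label}: {title_odds(probs):.3f}")
```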

 
