# Superforecasting
**Philip Tetlock and Dan Gardner** | [[Prediction]]

---
> "You have to have tremendous humility in the face of the game because the game is extremely complex, you won't solve it, it's not like tic-tac-toe or checkers."
Most expert predictions are barely better than chance. Tetlock's landmark research proved this embarrassingly—pundits, analysts, and specialists performed about as well as dart-throwing chimps when forecasting political and economic events. But some forecasters consistently beat the odds, and this book explains how.
Superforecasting demands thinking that is open-minded, careful, curious, and—above all—self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why commitment to self-improvement is the strongest predictor of performance. Not intelligence. Not expertise. The willingness to keep getting better.
The distinction between humility in the face of the game and humility in the face of your opponents matters. The game is complex; you won't solve it. But that doesn't mean you can't outperform others who aren't trying as hard or as systematically.
---
## Core Ideas
### [[The Superforecaster Method]]
Superforecasters tackle questions in a roughly similar way—one that any of us can follow:
**Unpack the question into components.** Distinguish as sharply as you can between the known and unknown. Leave no assumptions unscrutinised.
**Adopt the outside view first.** Put the problem into a comparative perspective that downplays its uniqueness. Treat it as a special case of a wider class of phenomena. What's the base rate for this type of event?
**Then adopt the inside view.** Play up the uniqueness of the problem. What specific factors make this case different from the reference class?
**Explore differences between your views and others'.** Pay special attention to prediction markets and other methods of extracting wisdom from crowds. Where do you disagree, and why?
**Synthesize into a single vision.** Combine these perspectives as acutely as a dragonfly's compound eye—many lenses, one image.
**Express your judgment precisely.** Use a finely grained scale of probability, not vague terms like "likely" or "possible."
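As a toy illustration of steps two, three, and six, here is a minimal sketch that starts from an outside-view base rate and applies an inside-view adjustment as a Bayesian odds update. The numbers and the `likelihood_ratio` framing are illustrative assumptions, not the book's prescription:

```python
def combine_views(base_rate: float, likelihood_ratio: float) -> float:
    """Outside view in, precise probability out.

    base_rate: how often events in the reference class occur.
    likelihood_ratio: inside-view adjustment; >1 means the case-specific
    evidence favours the event, <1 means it argues against it.
    """
    prior_odds = base_rate / (1 - base_rate)        # outside view as prior
    posterior_odds = prior_odds * likelihood_ratio  # inside view as update
    return posterior_odds / (1 + posterior_odds)

# Outside view: ~20% of cases in the reference class resolve "yes".
# Inside view: specifics make this case about 3x likelier than typical.
print(f"{combine_views(0.20, 3.0):.0%}")  # 43%, not "fairly likely"
```

The odds form makes the division of labour explicit: the reference class sets the prior, and everything case-specific enters only through the update.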
### [[Tacit Knowledge]]
Reading books on forecasting is no substitute for the experience of the real thing. There's a physics formula for riding a bicycle—you adjust the curvature of your path in proportion to the ratio of your unbalance over the square of your speed. But knowing that formula won't keep you upright. You need tacit knowledge, the sort we only get from bruising experience.
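Written out (my notation, since the text gives the formula only in prose; take $\kappa$ as the curvature of your path, $\theta$ as the angle of unbalance, and $v$ as your speed):

$$\kappa \propto \frac{\theta}{v^{2}}$$

Memorising it changes nothing; the knowledge that keeps you upright lives in your hands, not on the page.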
Learning to forecast requires trying to forecast. Making predictions, tracking them, seeing where you went wrong, and updating your approach. The hard work of research, the careful thought and self-criticism, the gathering and synthesizing of other perspectives, the granular judgments and relentless updating—there's no shortcut.
### [[When Intuition Works]]
Not all intuition is equally valid. Kahneman and Klein's adversarial collaboration identified the conditions under which we can trust expert intuition and those under which we can't.
We have reason to trust the intuitions of an experienced fireground commander about building stability, or a nurse's intuition about an infant's health. These are domains with clear feedback, stable patterns, and opportunities to learn.
We have less reason to trust the intuitions of a stockbroker. If publicly available information could predict stock performance, the price would already reflect it. This is a domain where expertise doesn't reliably translate into forecasting ability.
The question to ask: does this domain offer valid cues and regular feedback that allow genuine pattern learning? Or is it a "wicked" environment where feedback is delayed, noisy, or absent?
### [[Auftragstaktik]]
"Never tell people how to do things. Tell them what to do, and they will surprise you with their ingenuity." This is Auftragstaktik—mission-type tactics from German military doctrine. Commanders tell subordinates what their goal is but not how to achieve it.
This matters for forecasting because no plan survives contact with the enemy, and nothing is certain. Rigid procedures break down when reality diverges from expectations. Superforecasters need the flexibility to update their methods as they learn, not slavish adherence to a fixed protocol.
---
## Key Insights
**Commitment to self-improvement is the strongest predictor of forecasting performance.** Not raw intelligence, not domain expertise, not access to information. The willingness to track your predictions, identify errors, and systematically improve. This is a skill, and skills can be developed.
**Universal agreement is a warning flag.** A smart executive will not expect universal agreement, and will treat its appearance as a sign that groupthink has taken hold. An array of judgments is welcome proof that people are actually thinking for themselves and offering unique perspectives. If everyone agrees, something has gone wrong.
**Calibration matters more than confidence.** Superforecasters express probabilities precisely—not "likely" but "73%." This granularity forces clear thinking and enables feedback. If you said something had a 73% chance and it happened, you can track whether your 73% calls come true about 73% of the time. Vague language ("probably," "could happen") makes learning impossible.
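A minimal sketch of what that feedback loop might look like, assuming a simple log of (stated probability, outcome) pairs and bucketing by stated probability; the data here is made up:

```python
from collections import defaultdict

# Hypothetical forecast log: (stated probability, did it happen?)
forecasts = [(0.73, True), (0.73, True), (0.73, False), (0.73, True),
             (0.90, True), (0.90, True), (0.30, False), (0.30, True)]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

# Well calibrated: events called at p% should occur about p% of the time.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"called {prob:.0%}: happened {hit_rate:.0%} (n={len(outcomes)})")
```

With only a handful of forecasts per bucket the hit rates are noisy; calibration only becomes readable across many tracked predictions.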
**The outside view corrects for narrative seduction.** We naturally focus on the specific details of a situation, constructing compelling stories about why this case is different. The outside view asks: what usually happens in situations like this? What's the base rate? This perspective check often reveals that our "unique" situation is not so unique after all.
---
## Connects To
- [[Everything Is Predictable]] - Bayesian reasoning as the mathematical foundation for updating beliefs; Tetlock's superforecasters are intuitive Bayesians
- [[Black Box Thinking]] - Both emphasise learning from errors and systematic self-correction; you can't improve what you don't measure
- [[Algorithms to Live By]] - Explore/exploit tradeoffs; the outside view is a form of prior, the inside view is updating on new evidence
- [[Antifragile]] - Taleb would argue some domains are fundamentally unpredictable; Tetlock shows that even in uncertain domains, some predictors are better than others
- [[The Fifth Discipline]] - Mental models shape what we see; superforecasters deliberately examine and update their mental models
---
## Final Thought
The core insight is uncomfortable: expertise doesn't automatically confer forecasting ability. Domain knowledge helps, but it's not sufficient. What separates superforecasters from the rest is method and mindset—the discipline to unpack questions, seek disconfirming evidence, update beliefs incrementally, and track results honestly.
This is learnable. You don't need to be a genius. You need to be humble about the complexity of the game whilst still believing you can improve through effort. You need to make predictions, express them precisely, track them, and learn from your mistakes. The hard work of research, the careful thought and self-criticism, the relentless updating—that's the path.
No plan survives contact with the enemy. Nothing is certain. But within that uncertainty, systematic effort beats dart-throwing chimps. The question is whether you're willing to put in the work.