# Everything Is Predictable
**Tom Chivers** | [[Numbers]]

---
> "All decision-making under uncertainty is Bayesian—or to put it more accurately, Bayes' theorem represents ideal decision-making, and the extent to which an agent is obeying Bayes is the extent to which it's making good decisions."

Not "Bayes is useful" or "Bayes is elegant." Bayes is *optimal*. If you're making decisions under uncertainty (and you are), then the degree to which you're Bayesian is the degree to which you're rational.
**All inference requires priors**—beliefs you hold before seeing new evidence. You can't escape this. Pretending to be "objective" by refusing to state priors doesn't make you more rigorous; it makes inference impossible. Bayesian thinking forces you to be explicit about what you believe and how much new evidence should change that belief.
The book also exposes the limits of frequentist statistics—p-values, significance tests, the machinery of academic research. These tools answer the wrong question. They tell you *how likely you are to see this data, given a hypothesis*. But what you actually want to know is *how likely the hypothesis is, given the data*. Only Bayes answers that.
---
## Core Frameworks
### [[Bayes' Theorem]] – Updating beliefs in light of evidence
Bayes' theorem tells you **how much to change your belief** when presented with new data. But to use it, you need:
1. **A prior**: your belief about how likely a hypothesis was *before* seeing new evidence.
2. **The likelihood**: how likely the new evidence is, assuming the hypothesis is true.
3. **The posterior**: your updated belief *after* seeing the evidence.
The formula formalises what rational belief revision looks like. It's not magic—it's just consistency.
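The three ingredients above can be sketched as a short calculation. The numbers here are illustrative assumptions, not the book's: a disease with 1% prevalence (the prior), a test with 90% sensitivity (the likelihood), and a 9% false-positive rate.

```python
# A minimal sketch of Bayes' theorem with made-up numbers: a disease
# with 1% prevalence, a test with 90% sensitivity and a 9% false-positive
# rate. How likely is the disease, given a positive result?

def posterior(prior, likelihood, likelihood_if_false):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

p = posterior(prior=0.01, likelihood=0.90, likelihood_if_false=0.09)
print(f"P(disease | positive test) = {p:.3f}")  # ≈ 0.092
```

Despite the "90% accurate" test, the posterior is only about 9%, because the prior (1% prevalence) dominates. This is exactly the kind of result that feels wrong until you make the prior explicit.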
### [[Priors]] – Subjective beliefs as a feature, not a bug
A **prior probability** captures how likely you thought a hypothesis was before new evidence appeared. Priors can be subjective. That's not a problem—it's unavoidable. Inference without priors is impossible.
Different people with different priors can reasonably interpret the same evidence differently, but **beliefs should converge as evidence accumulates**.
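The convergence claim can be demonstrated directly. This sketch uses a Beta conjugate prior (a standard Bayesian device, assumed here rather than taken from the book): two observers start with opposite beliefs about a coin's bias, watch the same 1,000 flips, and end up nearly agreeing.

```python
# Sketch: two observers with very different priors about a coin's bias
# update on the same flips; their posterior means converge. With a
# Beta(a, b) prior, updating is just counting heads and tails.
import random

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

# Sceptic thinks the coin leans tails; believer thinks it leans heads.
agents = {"sceptic": (1, 9), "believer": (9, 1)}
for name, (a, b) in agents.items():
    prior_mean = a / (a + b)
    post_mean = (a + heads) / (a + b + len(flips))
    print(f"{name}: prior mean {prior_mean:.2f} -> posterior mean {post_mean:.3f}")
```

The priors (0.10 vs 0.90) differ wildly; after 1,000 flips the posterior means differ by less than 0.01, both near the true bias. The data swamps the prior.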
**Occam's razor** provides a rule of thumb: simpler hypotheses get higher priors, because a complex hypothesis spreads its probability over many possible outcomes, so any one outcome it predicts is less likely.
### [[Bayesian vs Frequentist]] – Two philosophies of probability
**Bayesian** asks: *How likely is the hypothesis, given the data?* Probability = subjective belief. Requires explicit priors. Direct inference about hypotheses.
**Frequentist** asks: *How likely is the data, given the hypothesis?* Probability = objective frequency in repeated trials. Avoids priors (but smuggles them in via methodology). Indirect inference via significance tests.
### [[P-values]] – Misunderstood and misused
A **p-value** measures how unusual the data is *under a null hypothesis*, not how likely the hypothesis is to be true. P-values answer: "If the hypothesis were true, how often would I see data this extreme?" They don't tell you: "How likely is my hypothesis to be true?"
The danger: researchers treat "statistically significant" as "confirmed," when it means nothing of the sort.
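The gap between the two questions can be made concrete. This sketch uses illustrative numbers (60 heads in 100 flips) and a deliberately simple Bayesian setup: a single alternative hypothesis (bias = 0.6) with 50/50 prior odds. Both choices are my assumptions for the demo, not the book's.

```python
# Contrast the two questions for 60 heads in 100 flips.
# p-value: "If the coin were fair, how often would I see data this extreme?"
# Bayes:   "Given this data, how likely is it that the coin is fair?"
from math import comb

n, k = 100, 60

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided p-value under the fair-coin null: P(|heads - 50| >= 10).
p_value = sum(binom_pmf(n, i, 0.5) for i in range(n + 1) if abs(i - 50) >= 10)

# Posterior for "fair" vs one specific alternative (bias 0.6), equal priors.
like_fair = binom_pmf(n, k, 0.5)
like_biased = binom_pmf(n, k, 0.6)
post_fair = like_fair * 0.5 / (like_fair * 0.5 + like_biased * 0.5)

print(f"p-value (data given fair coin): {p_value:.3f}")
print(f"P(fair coin given data):        {post_fair:.3f}")
```

The p-value (~0.057) and the posterior probability of the null (~0.12) are different numbers answering different questions; neither is a substitute for the other, which is precisely why reading "p < 0.05" as "hypothesis confirmed" goes wrong.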
---
## Key Insights
**Probability is not an objective property of the world.** It reflects our **ignorance and uncertainty** about it. Bayesianism treats probability as **a statement about what we don't know**, not what objectively exists. This framing is honest: it acknowledges that different people, with different information and priors, will reasonably disagree.
**Inference (Bayesian) works backwards from evidence to evaluate hypotheses.** "Given this data, how likely is the hypothesis?" **Sampling (frequentist) works forwards from hypothesis to predict data.** "Given this hypothesis, how often would I see this data?" Most real-world questions are inference questions. We observe data and want to know what caused it. Frequentist methods sidestep this by answering a different question.
**You can't avoid priors.** The question is whether you make them **explicit** (Bayesian) or **implicit** (frequentist). Frequentist methods smuggle in priors through methodological choices: which tests to run, how to define significance, when to stop collecting data. Making priors explicit forces discipline: you have to justify them, and you have to show how evidence changes them.
**If two people start with different priors, they'll interpret evidence differently.** But **as evidence accumulates, beliefs converge**. This is the Bayesian defence against the "subjectivity problem": yes, priors are subjective, but posteriors are constrained by data.
**Variance** is how far data points spread around the mean. **Standard deviation** is the square root of variance—a more interpretable measure of spread, expressed in the same units as the data. These measures highlight uncertainty rather than eliminating it, reinforcing the Bayesian view that **probability quantifies what we don't know**.
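The two measures take one line each to compute. The toy data set here is purely illustrative.

```python
# Sketch of the two spread measures on a toy sample (illustrative data).
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)  # mean squared deviation
std_dev = variance ** 0.5                                  # same units as the data

print(f"mean={mean}, variance={variance}, std dev={std_dev}")
# prints: mean=5.0, variance=4.0, std dev=2.0
```

A variance of 4 "squared units" is hard to picture; a standard deviation of 2, in the data's own units, is not—which is why the square root is worth taking.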
**"Precise estimates, high certainty, or small samples. Pick two."** You can't have it all. Data analysis always involves trade-offs between accuracy, confidence, and feasibility. This is another way of saying: **uncertainty is irreducible**. We manage it; we don't eliminate it.
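The trade-off above has a simple arithmetic core: the margin of error of a sample estimate shrinks only with the square root of the sample size, so each extra digit of precision (at fixed confidence) costs roughly 100× the data. The formula and numbers below are a standard illustration (95% interval for a proportion), assumed here rather than quoted from the book.

```python
# Sketch of the precision/certainty/sample-size trade-off: the margin of
# error of a sample proportion shrinks like 1/sqrt(n), so precision or
# certainty must be bought with sample size. Numbers are illustrative.
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: ±{margin_of_error(n):.3f}")
```

Going from 100 to 10,000 respondents (100× the cost) only cuts the margin from about ±0.10 to about ±0.01 (10× the precision). Tighten the certainty (raise `z`) and the margin widens again: pick two.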
---
## Connects To
- [[Prediction Machines]] – Agrawal, Gans, and Goldfarb on prediction complements Chivers on inference
- [[Algorithms to Live By]] – Brian Christian and Tom Griffiths on computational thinking and Bayesian reasoning
- [[Black Box Thinking]] – Matthew Syed on learning from failure, which requires honest updating of priors
- [[How Finance Works]] – Mihir Desai on probability and expected value in capital allocation
---
## Final Thought
This book forces you to **confront your priors**. That's uncomfortable. Most of us prefer to think we're "just following the data" or "letting the evidence speak for itself." But that's a fiction. Every inference starts with beliefs—about base rates, about how the world works, about what's plausible. Bayesian thinking makes those beliefs explicit and shows you how to update them.
**Inference without priors is impossible.** Frequentist statistics tries to avoid stating priors, but it just hides them in methodological choices. Bayesianism is honest: it says, "Here's what I believed before. Here's the new evidence. Here's how much my belief should change."
**The distinction between inference and sampling** is what I keep coming back to. Most real-world questions are inference questions. We see data and want to know what caused it. "Given this sales decline, how likely is it that the market is shrinking versus that we're losing share?" "Given this test result, how likely is it that I have the disease?" Frequentist methods don't answer these questions directly—they answer a different question (how often would we see this data if the hypothesis were true?) and hope you can work backwards.
**Probability as subjective knowledge** isn't a weakness—it's a feature. Probability quantifies *what we don't know*, not what objectively exists. Two people with different information should have different probabilities for the same event. That's rational, not relativist. And as evidence accumulates, Bayesian updating ensures beliefs converge.
**Make your priors explicit.** When you're making a forecast, placing a bet, or evaluating a hypothesis, start by asking: what did I believe before seeing this evidence? Then ask: how much should this evidence change my belief? That's Bayesian thinking. And it's the only coherent way to reason under uncertainty.