# Systemantics
**John Gall** | [[Prediction]]

---
> "A COMPLEX SYSTEM THAT WORKS IS INVARIABLY FOUND TO HAVE EVOLVED FROM A SIMPLE SYSTEM THAT WORKED. A COMPLEX SYSTEM DESIGNED FROM SCRATCH NEVER WORKS AND CANNOT BE MADE TO WORK. YOU HAVE TO START OVER, BEGINNING WITH A WORKING SIMPLE SYSTEM."
Most management thinking assumes systems can be designed, controlled, and optimised to perform as intended. Systemantics demolishes this fantasy. Systems develop their own goals the instant they come into being. They operate in failure mode most of the time. They don't work for you—they work for themselves.
This isn't pessimism. It's realism that makes you effective. Once you accept that systems resist control, you stop trying to force them and start working with their actual behaviour. You design for failure mode, not ideal conditions. You evolve systems from simple working versions rather than blueprinting complexity. You recognise that your job isn't to make systems obey—it's to remove obstacles and allow useful things to happen.
---
## Core Ideas
### [[New Systems Mean New Problems]]
Once created, a system doesn't go away. Systems are like babies: once you have one, you're stuck with it. The Fundamental Theorem is brutally simple: new systems mean new problems. You never get a clean solution. You get a new configuration of difficulties.
**Complex systems exhibit unexpected behaviour.** Scaling up a working system changes everything. A large system produced by expanding a smaller system does not behave like the smaller system. The assumption that "more of the same" preserves function is wrong every time.
### [[Systems Develop Their Own Goals]]
The moment a system comes into being, it develops goals of its own. Those goals are rarely aligned with what you wanted. Systems don't work for you or for me. They work for their own goals.
The System Takes The Credit for any favourable eventuality. The public school system claims responsibility for great writers because it taught them to write. The NIH claims credit for biomedical advances because it funded the research. Meanwhile, the system opposes its own proper function: it gets in the way, it kicks back, and it tends to work against the very thing it was supposedly designed to do.
### [[Failure Mode Is The Default]]
Any large system operates in failure mode most of the time. The textbook description of how a system works when everything is perfect is irrelevant. The pertinent questions are: how does it work when its components aren't working well? How does it fail? How well does it work in failure mode?
Error correction is what we do. Not heroic optimisation. Not brilliant design. Just endless, unglamorous correction of things going wrong. As systems grow in size and complexity, they tend to lose basic functions. Colossal systems foster colossal errors, which tend to escape notice or even be excused when spotted.
### [[Information Decays]]
Information decays over time. Worse: the most urgently needed information decays fastest. By the time feedback reaches you, it's describing the past, not the present.
The Inaccessibility Theorem captures the frustration: the information you have is not the information you want. The information you want is not the information you need. The information you need is not the information you can obtain.
In a closed system, information tends to decrease and hallucination tends to increase. A poorly functioning system generates exponentially more messages as it sinks deeper into the morass of unfinished tasks. Eventually the non-functioning system is completely occupied with its own internal communication processes.
---
## Key Insights
**You can't change just one thing.** Every intervention ripples through the system in unexpected ways. The twin Limit Theorems bracket the problem: you can't change just one thing, but you can't change everything either. Pragmatically, aim at changing one or a few things at a time and then work out the unexpected effects.
**The System Ignores Feedback at its peril.** A system that ignores feedback has already begun the process of terminal instability. It will eventually be shaken to pieces by repeated violent contact with the environment it's trying to ignore. But feedback has intrinsic limitations—it always gives a picture of the past, never the future.
**To be effective, an intervention must introduce change at the correct logical level.** Changing actors doesn't improve the dialogue of a play. Punishing actors is equally ineffective. Control lies at the level of the script, not the actors. If your problem seems unsolvable, consider that you may have a meta-problem.
**Systems attract systems-people.** Specialised systems select for specialisation. The end result of extreme competition is bizarreness. Designers of systems tend to design ways for themselves to bypass the system—a reliable indicator that the system doesn't actually work as advertised.
**Loose systems last longer and function better.** Systems built too tight or wound too tightly will seize up, peter out, or fly apart. Work with human tendencies rather than against them. Go with the flow. The Vector Theory of Systems: systems run best when designed to run downhill. Avoid uphill configurations.
**Big systems either work on their own or they don't. If they don't, you can't make them work.** Nothing is more useless than struggling against a Law of Nature. In complex systems, malfunction and even total non-function may not be detectable for long periods, if ever.
**Catalytic Managership removes obstacles rather than making things happen.** Trying to make something happen is too ambitious and usually fails. But removing obstacles in the way of something happening often allows a great deal to occur with little effort on the part of the manager, who nevertheless gets a large part of the credit.
**Great advances do not come out of systems designed to produce great advances.** Solutions usually come from people who see in the problem only an interesting puzzle, and whose qualifications would never satisfy a select committee. Major advances take place by fits and starts. Complicated systems produce complicated responses to problems.
---
## Connects To
- [[The Fifth Discipline]] - another systems thinking lens, but Systemantics is far more cynical about control
- [[Antifragile]] - both recognise that systems must be designed for disorder, not ideal conditions
- [[Black Box Thinking]] - where Black Box focuses on learning from failure, Systemantics assumes failure is the default state
- [[The Unaccountability Machine]] - explores the paradox of control in bureaucratic systems
- [[Requisite Organization]] - attempts to design organisations that work; Systemantics suggests this is hubristic
- [[Making Sense of Chaos]] - complexity economics assumes systems can't be controlled, only understood
---
## Final Thought
The System Continues To Do Its Thing, Regardless of Circumstances. You can't force it. You can't redesign it wholesale. You can't optimise it into obedience.
What you can do: evolve simple systems that work into more complex ones. Design for failure mode, not ideal conditions. Remove obstacles rather than mandate outcomes. Accept that error correction is the job, not heroic control.
The reframe is profound. You're not a designer imposing order. You're a gardener removing weeds and hoping something useful grows.