How not to run a village in the Sahel
A review of Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations (UK). [Via The Diff’s reading list.]
‘Between a good and a bad economist this constitutes the whole difference — the one takes account of the visible effect; the other takes account both of the effects which are seen, and also of those which it is necessary to foresee.’ Bastiat, 1850.
Your leadership talent is finally recognised and you are tasked with running a village in the Sahel. How can you improve the life of your villagers? You decide to drill some more wells to support more cattle and crops, improve the medical care available, and get rid of a disease-spreading fly. You’re off to a good start and prosperity soon increases.
Years pass, and things don’t go quite as planned. The wells are starting to run dry — you’ve exhausted the groundwater. The cattle are now overgrazing the remaining grass, pulling it up by the roots and accelerating the decline in available farmland. The population is increasing now that the fly is gone and medical care has improved; in fact, with no birth control, it’s increasing enormously, and the agricultural economy is now overmanned even as it begins to decline precipitously.
Soon there’s a famine, and your village falls to a development level lower than what you started with. What went wrong?
Such situations and errors are the premise of Dietrich Dörner’s The Logic of Failure. People are typically quite bad at making decisions in what he dubs ‘complex situations’ — i.e. systems with interconnected variables. The book’s chief examples are various computer simulations of fictional Sahel villages or towns somewhere in the West, representing agrarian economies and developed economies respectively.
Dörner uses the results of several experiments to teach us lessons about complex situations. Individuals play through computer simulations of these situations, usually with poor results. In the African village, this usually means precipitating an ecological collapse along the lines of my opening example. (Relatedly, Robert Caro’s first Lyndon Johnson book details how the lush Texas Hill Country looked like a great bet for farmers, but clearing the flora meant that the soil soon lost its quality, and farming there turned into a nightmare.)
As Byrne Hobart notes, an interesting feature of the experiments is that initial problems often led participants to make increasingly wild decisions, and later even to accuse the simulation of being rigged against them. At other times, they fell back on a kind of coping strategy, focusing on the small number of factors they felt comfortable interacting with: for example, a participant who was a social worker in their day job ended up focusing on a single child in the simulation, instead of working on the wider situation in the town.
In a few rare cases, participants are passable or even good at running the settlement, and Dörner attempts to extract helpful tips for the rest of us. His big three lessons are common sense; thinking in ‘temporal configurations’, i.e. being aware of time lags (which includes anticipating events and trends, such as the exponential growth of an epidemic); and thinking in systems, i.e. being aware of interconnected factors and side effects.
His suggested approach for complex situations is:
Define goals
Develop a (mental) model for the situation and gather information
Extrapolate and analyse
Decide upon a course of action, and do it
You should hop back through these steps as needed; you’ll get things wrong at the start and need to update your mental model, or even your fundamental goals (you may not have realised, for example, that ‘not running out of groundwater’ was a goal you should have kept in mind back in the Sahel — sometimes it pays to think about problems you don’t have yet, too). You might realise your chosen policy is ineffectual and, in possession of new information, opt for an alternative. You might have forgotten the difference between ‘urgent’ and ‘important’, or have realised with experience that your complex goal (e.g. well-being of citizens) needs to be broken down into partial goals (e.g. better salaries, better roads).
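To make the loop concrete, here’s a minimal, self-contained sketch in Python — my own toy example, not anything from the book. A villager sets a cattle target against an unknown pasture capacity, and on every pass through the cycle they extrapolate from their mental model, act, observe, and then revise both the model and the goal itself. All the numbers and names are invented for illustration.

```python
# A toy illustration of the plan-act-observe-revise cycle; the pasture model,
# numbers and variable names are all invented for the sake of the example.

def real_pasture(cattle, true_capacity=100.0):
    """The hidden 'real world': percentage of grass left after grazing."""
    return max(0.0, 100.0 * (1 - cattle / true_capacity))

goal_cattle = 120            # initial goal: graze as many cattle as possible
estimated_capacity = 200.0   # initial (over-optimistic) mental model of the pasture

for year in range(5):
    # Extrapolate: what does our mental model predict for this goal?
    expected_grass = max(0.0, 100.0 * (1 - goal_cattle / estimated_capacity))
    # Act, and observe what actually happens.
    observed_grass = real_pasture(goal_cattle)
    # Hop back: revise the model when reality undershoots the prediction...
    if observed_grass < expected_grass:
        estimated_capacity *= 0.8
    # ...and revise the goal itself if the updated model says it is unsustainable.
    goal_cattle = min(goal_cattle, int(0.8 * estimated_capacity))
    print(f"year {year}: goal={goal_cattle}, "
          f"believed capacity={estimated_capacity:.0f}, grass left={observed_grass:.0f}%")
```

Run it and you can watch the goal being walked down as the mental model catches up with the pasture’s real capacity — the point isn’t the arithmetic, it’s that goals and models get revised in the same loop as the actions.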
Many of the simulations involve time lags (just like real life), so it really pays to wait and see what the results are: if it takes five minutes for the heating system you’ve just tweaked to actually affect the temperature in your flat, you should wait to find out what temperature is reached before ratcheting the dial up to the top and creating an oven. Measuring outcomes in complex situations matters; otherwise you won’t know whether you made the right call, and you won’t be able to decide, on an informed basis, whether to maintain or repeat your policy. You also need to pay attention to future trends — will that graph keep going up forever? Accelerate? Decelerate? For example, what will happen if my village population keeps increasing?
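A quick simulation makes the point about waiting. This is my own toy model of a lagged heater, with made-up numbers rather than anything from Dörner’s experiments: the ‘impatient’ strategy keeps cranking the dial because nothing seems to have happened yet, while the ‘patient’ one sets a moderate level and lets the lag play out.

```python
from collections import deque

def simulate(patient, target=21.0, steps=30, lag=5):
    """Toy flat-heating model: the dial only reaches the room after `lag` time steps."""
    temp, dial = 15.0, 0.0
    pipeline = deque([0.0] * lag)   # heat already 'in the pipe' but not yet felt
    history = []
    for _ in range(steps):
        if temp < target:
            dial = 2.0 if patient else dial + 2.0  # patient: hold a setting; impatient: keep cranking
        else:
            dial = 0.0
        pipeline.append(dial)
        temp += 0.5 * pipeline.popleft() - 0.1 * (temp - 15.0)  # lagged heating minus losses
        history.append(round(temp, 1))
    return history

print("impatient peak:", max(simulate(patient=False)), "degrees")  # wildly overshoots the target
print("patient peak:  ", max(simulate(patient=True)), "degrees")   # mild overshoot, then settles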
Dörner’s recommendation to set (and update) goals reminded me of Ben Thompson’s Principle Stack (here and here). You make a principle stack by deciding what you care about (e.g. balanced town budgets, wages, air pollution) and how those things rank against each other. When one comes into conflict with another, you pick whichever is higher up (or reorder the stack to reflect your preferences). Doing the same thing with your goals can help guard against over-planning amidst uncertainty, as you don’t have to rethink everything all the time. It can also help you delegate tasks to subordinates, as they will know what you care about.
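A principle stack is simple enough to express in a few lines. This sketch is my own framing, not Thompson’s: it just treats the stack as an ordered list (using the example goals above) and resolves any conflict by rank.

```python
# A minimal sketch of a principle stack: an ordered list of things you care about,
# highest priority first. The goals listed here are just the examples from above.
principle_stack = ["balanced town budgets", "wages", "air pollution"]

def resolve(goal_a: str, goal_b: str) -> str:
    """When two goals conflict, whichever sits higher in the stack wins."""
    return min(goal_a, goal_b, key=principle_stack.index)

print(resolve("air pollution", "wages"))          # -> wages
print(resolve("wages", "balanced town budgets"))  # -> balanced town budgets
```

The value isn’t in the code, of course; it’s that once the ranking is written down, conflicts become a lookup rather than a fresh debate each time, which is exactly what makes delegation easier.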
A central conclusion of the book is that over-generalising can be extremely unhelpful. It’s easy to apply a model from elsewhere to a new problem without considering whether the conditions that underpin the model really hold in the new situation. This is the intellectual’s version of ‘Seeing Like a State’ — instead of lots of novel, unique situations, there are only situations to which some simplified method or model from the past that I happen to know can be applied. In fact, the past efficacy of those methods can blind us to their weak spots and make us over-confident in them: ‘[T]he rules are local — they are to a large extent dictated by specific circumstances.’
Every generalist wants to compare new situations to old ones: Trump is the new Nixon/Reagan/Jackson; Apple, Facebook, Microsoft and Google are the internet’s version of the big car manufacturers; Covid-19 is the new SARS. Dörner wants readers to dig down into the details to ascertain whether that’s really a fair thing to say.
This is obvious when you think about it. Undeniable, even. But it’s also easy to believe that people are lazy and use analogies a bit too freely, hence the advice is needed. At the margin, we overgeneralise, so it’s best to push in the opposite direction (except when I generalise on my blog, obviously).
At another point, Dörner argues that we can see failure coming from the start when we look back at people’s thought processes. But isn’t there a danger we think everything’s inevitable when we look back like that? As Peter Thiel pointed out:
‘Most businesses fail for more than one reason. So when a business fails, you often don’t learn anything at all because the failure was over-determined. You will think it failed for Reason 1, but it failed for Reasons 1 through 5. And so the next business you start will fail for Reason 2, and then for 3 and so on.’
So is Dörner overdetermining failure by looking at earlier decisions so much? Or is Thiel barking up the wrong tree, and is failure always evident from the start?
Dörner’s experiments are about making good decisions in a simulated environment — they’re all about thought processes. Businesses, though, are about thought processes plus a load of other contingent things, some of which are beyond your control. When it comes to picking start-up founders, former YC head Sam Altman is really confident that you can sift good people from bad with great accuracy after a short interview — in which, presumably, the main thing you get to examine is their thinking. So Dörner is right that you can detect bad thinking from the start, and that matters. It’s just that other things matter in business, too.
Despite the common-sense yet incisive observations Dörner makes, scepticism of psychology studies from the 1980s is probably warranted in light of the replication crisis. Sometimes the stuff he’s measuring is somewhat nebulous: can you really measure how ‘specific’ a participant’s goal was and turn that into a number?
Elsewhere, Dörner is also critical of what he describes as ‘ritual’. One experiment involved people selecting numbers on a regulator in order to find the right temperature for a storeroom — but there was a five-minute lag before the temperature caught up with the regulator. Some participants ended up saying nonsensical things like: ‘Twenty-eight is a good number’ or ‘Multiples of ten should be avoided.’ They overgeneralised from a single experience: selecting a certain number moved the temperature in the desired direction, they thought — except the time lag meant that the temperature was soon off again.
When outcomes conflicted with these invented hypotheses, some participants simply added conditions: ‘oh, it’s increments of ten’, or ‘you have to do XYZ in series to get it to the right temperature’, or whatever. Dörner claims that this problem will only be worse in the real world, where situations are even more complex and time lags are even longer.
But how does he explain how people learned to eat manioc? Or any other useful human behaviour that was developed despite long time lags? Innovation and rationality used to get you killed, and rituals are a good way to pass on some knowledge without it being fully explainable to modern minds. Perhaps not all the complex situations we’ll find ourselves in can be rationalised the way Dörner would like.
That said, time lags are particularly important when it comes to epidemics. Dörner explains exponential growth using the example of AIDS, but writing in 2021, a much more recent example of exponential growth springs to mind:
https://twitter.com/ByrneHobart/status/1242157502118100995?s=20
Take account of all you know about the current spread of a disease — including its incubation period and its likely ceiling — before you credit your interventions with slowing it. Or, as Dörner puts it: ‘…we cannot interpret numbers based solely on their size. To understand what they mean, we have to take into account the process that produced them, and that is not always easy.’
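To make the point concrete, here’s a back-of-the-envelope calculation with invented numbers — the growth rate and lag are my assumptions, not figures from Dörner or the tweet above. Under exponential growth, a reporting lag means the cases you can see today describe the outbreak as it was a week ago.

```python
import math

growth_rate = 0.2      # assumed daily exponential growth rate (illustrative only)
reporting_lag = 7      # assumed days between infection and a confirmed case
confirmed_today = 1000 # cases reported today

# Today's report describes infections that happened `reporting_lag` days ago;
# the outbreak has kept growing in the meantime.
estimated_infections_today = confirmed_today * math.exp(growth_rate * reporting_lag)
print(f"confirmed today: {confirmed_today}")
print(f"estimated infections today: {estimated_infections_today:.0f}")  # ~4,055
```

Under these toy assumptions the number you can see understates the true picture by roughly a factor of four — which is exactly why the size of a number tells you little without the process behind it.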
It’s important to think in systems, but that doesn’t solve all your problems. I read this book the same week that Dominic Cummings testified to MPs about the UK government’s response to the coronavirus pandemic last year. In March 2020, they were hesitant to impose a national lockdown because they thought that people would not obey it — a great example of systems thinking! They recognised their actions would have an effect on another part of the system, and so acted accordingly — but their assumption was later proved wrong. When they eventually locked down, people did obey it.
Cummings said that he and others within the government realised in early March that this plan (i.e. no lockdown, let the virus rip) wouldn’t work — without a lockdown, the virus would spread extremely rapidly, the NHS would be overwhelmed, people would die in hospital corridors, and there would be the effects of a lockdown regardless of any official orders because people would stay at home out of fear. This is also systems thinking, but a deeper and thus smarter version of it.
Dörner writes at the end of his book that he has no single mode of thought to promote as failsafe or perfect. Instead, he wants us to use computers to train ourselves to perform in complex situations, as they allow us to repeat challenges and learn from our mistakes. This is pretty forward-thinking for 1989, and is reminiscent of Tyler Cowen’s Average Is Over (2013), which suggests that it is a ‘computer plus a person’ that will ultimately stay ahead of computers alone, at least for the next while.
Systemic thinking, Dörner concludes, is ‘a bundle of capabilities, and at the heart of it is the ability to apply our normal thought processes, our common sense, to the circumstances of a given situation.’ This conclusion is at once profound and almost banal, but most of all it’s underrated by individuals and institutions today.
If you want me to write more things like this, you should subscribe to my weekly email, which includes links to what I write and a few extra interesting things.
Comments welcome below.