Twelve deep thinkers over at The Edge have a series on risk after the Fukushima disaster. I won’t try to reproduce the complexity and subtlety of their arguments, but risk and risk management are at the heart of what the Prevail Project is about. How can we think about risk in a domain of technological uncertainty? What does risk actually mean?
Risk is a modern concept, compared with the ancient and universal idea of danger. Dangers are immediate and apparent: a fire, a cougar, angering the spirits. Risk is danger that has been tamed by statistics: this heater has a 0.001% chance of igniting over the course of its lifespan, there are cougars in the woods, and so on. Risk owes its origins to the insurance industry, and to Lloyd’s of London, which was founded to protect merchant-bankers against the dangers of sea travel. While any individual ship might sink, on average most ships would complete their voyages, so investors could band together to prevent a run of bad luck from impoverishing any single member of the group.
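The pooling logic can be sketched with a toy simulation. The numbers here are purely illustrative, not historical: a made-up 5% chance of losing any given ship, and a made-up cargo value.

```python
import random

random.seed(42)

P_SINK = 0.05         # illustrative per-voyage chance of losing a ship
SHIP_VALUE = 100_000  # illustrative value of one ship's cargo

def per_investor_loss(n_ships):
    """Average loss per investor when n_ships investors pool their risk."""
    sunk = sum(random.random() < P_SINK for _ in range(n_ships))
    return sunk * SHIP_VALUE / n_ships

# A lone merchant either loses everything or nothing on a voyage...
solo = [per_investor_loss(1) for _ in range(10_000)]
# ...while a member of a 100-ship pool sees losses near the 5% average.
pooled = [per_investor_loss(100) for _ in range(10_000)]

print(max(solo))    # worst case alone: the entire cargo
print(max(pooled))  # worst case in the pool: a small fraction of the cargo
```

This is the linearity the next paragraph describes: with many independent ships, total losses cluster tightly around the average, so the pool's worst year is only modestly worse than its typical year, while the lone merchant's worst year is total ruin.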
This kind of risk is simple and easy to understand. It is what mathematicians refer to as linear: a change in the inputs, like the season, correlates directly with an outcome, like the number of storms and the number of ships sunk. The problem is that this idea of risk has been expanded to cover complex systems with many inter-related parts. As complexity goes up, comprehensibility goes down, and risks expand in complicated ways. Modern society is “tightly coupled”, a concept developed by Charles Perrow in his book Normal Accidents. Parts are linked in non-obvious ways by technology, ecology, culture, and economics, and failure in a single component can rapidly propagate through the system.
The 2007 financial crisis is a perfect example of a normal accident caused by tight coupling. Financiers realized that while housing prices fluctuate, they are usually stable on a national basis, and so developed collateralized debt obligations based on ‘slices’ of the housing market nation-wide, which were rated as highly secure investments. When the housing bubble collapsed, an event not accounted for in their models, trillions of dollars in investments lost any certain value. Paralysis spread throughout the financial system, leading to a major recession. While this potted history is certainly incomplete, normal accidents are the defining feature of the times. The 2010 Gulf of Mexico oil spill and the Fukushima meltdown were both due to events which were not accounted for in statistical models of risk, but which in hindsight appear inevitable over a long enough timescale.
Statistics and scientific risk assessment are based on history, but the world is changing, and the past is no longer a valid guide to the future. Thousand-year weather events are becoming more and more frequent, while new technologies are reshaping the fundamental infrastructure of society. When the probabilities and the consequences of an accident are entirely unknowable, how can we manage risk?
One option is the precautionary principle, which says that until a product or process is proven entirely safe, it is assumed to be dangerous. The problem with the precautionary principle is that it is different in degree, not in kind. It demands extremely high probabilities of safety, but doesn’t solve the problem of tight coupling. Another solution is basing everything on the worst possible case: what happens if the reactor explodes, or money turns into fairy gold. Systems which can fail in dangerous, expensive ways are inherently unsafe and should be passed over in favor of those whose failures have more local consequences. This solution has twin problems. The first is demarcating realistic from fantastic risks: after all, some Rube Goldberg scenario starting with a misplaced banana peel might lead to the end of the world. The second is that it discounts ordinary, everyday risk. Driving is far more dangerous per passenger-mile than air travel, yet people are much more afraid of plane crashes. A framework based on worst-case scenarios leads to paralysis, because everything might have bad consequences, and prevents us from rationally analyzing risk. The end state of worst-case thinking is being afraid to leave the house because you might get hit by a bus.
So the ancient idea of danger no longer holds, because we can’t know what is dangerous anymore, and mere fear of the unknown cannot stand against the impulse to understand and transform through science and technology. Risk has been domesticated in error; a society built on risk continually gambles with its future.
The solution involves decoupling: building cut-outs into complex systems so they can be stopped in an orderly manner when they begin to fail, and decentralizing powerful, large-scale infrastructure. Every object in the world is bound together in the technological and economic network that we call the global economy. We cannot assume that it will function the way it has forever; rather, we should trace objects back to their origins, locate the single points of failure, the places where large numbers of threads come together, and develop alternative paths around those failure points. Normal accidents are a fact of life, but there is no reason why they have to bring down people thousands of miles away from their point of origin.