On fast bombs, slow bombs and integral accidents

How we think of risk influences how we prepare

AI SAFETY

3/2/2025 · 3 min read

In the readings for my recent AI safety course, I was struck by the frequent comparisons made between the frontier labs building transformative AI now and the scientists who created the atomic bomb.

Is this an apt analogy?

Let’s think about what it conveys, whether directly or by association:

  • the size and resources involved

  • the effort as a scientific and technical collaboration

  • the scale of the potential risks

  • the geopolitical implications

Those ideas are all useful in their way for making AI risk more concrete. But the analogy is also limited. I would argue that comparing transformative AI to the atomic bomb is, in some ways, even misleading, for three main reasons:

  1. It frames the war as a kind of argument in which dropping the atomic bomb was the final word

  2. It implies an event that is localised in time and space

  3. It draws a nice neat line from intent to outcome


AI is none of those things, neither a conversation-stopper nor a localised event. It resists binaries. Both as a technology and as a concept, it is not easily contained.

So if AI is a bomb, it’s a slow bomb. In my mind, it’s more like the “Green Revolution” in agriculture, which introduced chemical fertilisers and pesticides on a mass scale around the world. Production of monoculture commodity crops rocketed, albeit at the cost of depleted soils and a great loss of biodiversity in the environment, along with malnutrition and chronic disease among people, as global diets shifted toward nutrient-poor carbohydrates.

The Green Revolution was a slow bomb because its harmful effects unfolded over a longer timescale than its beneficial ones. The harms were unintended, diffuse, and both more widely and more unevenly distributed than the localised, short-term productivity gains.

Now, I say all of this knowing it is unusual and possibly even heretical to call the Green Revolution a bomb (even a slow one). But I think it is important to be able to criticise technology, in the sense of critical thinking. When you go beyond judging something as good or bad, you can find the creative tension of its contradictions.

I like something that philosopher of technology Paul Virilio once said in an interview:

“Every technical object contains its own negativity. It is impossible to invent a pure, innocent object, just as there is no innocent human being. It is only through acknowledged guilt that progress is possible. Just as it is through the recognised risk of the accident that it is possible to improve the technical object.”

In Virilio’s view, the accident is endemic to the acceleration of modernity. At each stage of progress, there is an accident that exposes the shadow side of that particular technology: factory accidents, automobile, rail and plane crashes, nuclear meltdowns.

Those examples fall in the category of industrial accidents, which are localised events. In contrast, he says the post-industrial accident goes beyond place and time, and “becomes an environment.” When such an accident unfolds on a global scale, it is what Virilio calls “an integral accident” because it sets off other accidents and affects everyone.

Transformative AI has the potential to create an integral accident because it depends on and amplifies the effects of scale, complexity and interconnectedness. Yet I think that possibility can spur us to positive action.

Accidents are empirical. They happen. And when we accept that they happen for any number of reasons, it’s possible to be curious about what happened and how, rather than looking for a single cause, or a person to blame.

I know that the alignment problem has both technical and philosophical dimensions, but I worry that it becomes even more intractable when we require the fuzzy filings of intention to line up around an a priori moral magnet. Whose values could exert such a magical attraction? And what if something still went wrong with the “right” output or outcome?

My basic understanding of how safety cultures have been established in other industries, such as aviation and health care, is that doing so requires an acknowledgement and awareness of risks, a sense of trust among the people working in that space, and a commitment to objective reporting.

If we could establish that kind of safety culture for AI, I think we could start a conversation about how to prepare in practical ways for whatever might happen, whether it’s a fast bomb or a slow bomb, or an integral accident that becomes the new environment we have to adapt to.