The nuclear weapons analogy is the most common framework people reach for when trying to explain why AI requires serious governance. The argument goes: nuclear weapons were a transformative technology with catastrophic potential, and the world developed — imperfectly but meaningfully — a governance framework to constrain their proliferation. We should do the same for AI.
I understand the appeal. The analogy captures something true. Both technologies represent qualitative jumps in the potential for harm. Both have legitimate beneficial applications alongside catastrophic ones. Both require international coordination to govern effectively. And the Non-Proliferation Treaty framework, for all its flaws, has contributed to a world in which the number of nuclear-armed states has grown far more slowly than predicted: President Kennedy warned in 1963 that the 1970s might see fifteen to twenty-five nuclear powers, yet today there are nine.
But the analogy breaks down in ways that are more frightening than the similarities are reassuring.
What the nuclear analogy gets right.
Nuclear weapons created a recognition that some technologies require governance frameworks that transcend national sovereignty. No single nation can effectively govern nuclear proliferation through domestic policy alone: the technology is too consequential, and without international coordination the incentives to defect from non-proliferation are too strong. The same is true of AI.
The nuclear framework also established the principle that some capabilities require verification mechanisms — inspection regimes, satellite monitoring, test ban treaties — rather than relying on voluntary compliance. This principle is correct and directly applicable to AI. Voluntary commitments by AI companies to safety practices are insufficient governance for a technology of this consequence.
What the nuclear analogy gets dangerously wrong.
Nuclear weapons require physical infrastructure that is impossible to hide at scale. Uranium enrichment facilities require enormous industrial capacity. Plutonium production requires reactors. Testing requires remote locations and produces seismic signatures detectable globally. The physical requirements of nuclear weapons development create natural friction — the kind of friction that makes covert development difficult and covert deployment at scale nearly impossible.
AI requires a server, an internet connection, and code. Training a frontier model still demands significant compute, but the compute needed to reproduce any given level of capability falls steadily as algorithms and hardware improve. The knowledge required to fine-tune an existing model for harmful purposes is increasingly accessible. The physical signature of AI development is essentially zero. There is no seismic event when someone fine-tunes a language model. There is no satellite image of an enrichment facility. There is no inspection regime that could meaningfully monitor AI development across the hundreds of entities currently working on it.
The second critical difference is replication. A nuclear weapon, once built, exists in one place. It can be seized, destroyed, contained. An AI model, once trained, can be copied at negligible cost and distributed globally before any authority knows it exists. The proliferation dynamics of digital information are the inverse of nuclear proliferation dynamics in every relevant respect.
The window for effective governance is closing faster than anyone is acknowledging.
With nuclear weapons, the window for establishing effective governance frameworks was measured in decades. The first nuclear test was in 1945. The Non-Proliferation Treaty opened for signature in 1968, twenty-three years later. That was enough time, barely, to establish a framework before the technology proliferated beyond governable limits.
With AI, I do not believe we have twenty-three years. The capability curve is too steep. The accessibility is increasing too fast. The economic incentives driving development are too strong. The governance frameworks being proposed are too slow.
I do not say this to produce despair. I say it to produce urgency. The nuclear analogy is useful for making the case that AI requires serious governance. It becomes dangerous if it convinces us that we have the twenty-three years nuclear governance took. We do not.
S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part five of an ongoing series.