Sam Altman was fired from OpenAI in November 2023. He was reinstated five days later. But for those five days, the most powerful AI laboratory in the world was controlled by a board that the public knew almost nothing about, making decisions that the public had no input into, about a technology that affects everyone.
That is not a hypothetical. That happened.
Now extend the timeline. Not five days. Five years. Twenty years. Fifty years. The AI systems being built right now will still exist in some form in fifty years. The companies building them will not look the same. The people controlling them will not be the same. The values guiding their development will not be the same.
Who owns the machine owns the future. And we have no binding mechanism to ensure that the future owner shares the values of the current one.
The history of technology companies is not reassuring.
Google's founding motto was "Don't be evil." It became a punchline within a decade as the company built one of the most comprehensive behavioral surveillance systems in human history. Facebook was built in a dorm room by someone who genuinely believed he was connecting the world. What he built was a global platform for the industrialization of outrage, political manipulation, and the commodification of human attention.
I am not saying the founders of these companies were liars. I am saying that the gap between the values of founders and the behavior of mature, publicly traded, shareholder-accountable companies is well-documented and consistent. The idealism of the founding moment does not survive contact with the incentive structure of scale.
Now apply that pattern to AI systems of increasing capability, controlled by companies that will inevitably evolve, be acquired, go public, change leadership, or be captured by state actors with different values than the ones the founders held.
The Hitler problem.
I want to name something that people are reluctant to name because it sounds extreme. It is not extreme. It is the logical endpoint of a trajectory that is already underway.
An AI system of sufficient capability, controlled by an entity with authoritarian intent, is the most powerful tool for population control ever built. It does not require secret police. It does not require gulags. It requires the ability to monitor communication, control information flow, identify dissent, and respond to it — all of which AI systems can do at a scale and speed that no human institution can match.
We are building those systems right now. We are building them under governance frameworks that assume the controllers will share democratic values. Those frameworks have no enforcement mechanism for the scenario in which they do not.
Adolf Hitler came to power through the democratic process. So did Viktor Orbán. So did Recep Tayyip Erdoğan. The transition from democratic legitimacy to authoritarian control has happened within a single decade, in multiple countries, within living memory. The question of what an authoritarian leader with control of a mature AI system could do is not a science fiction question. It is a near-future policy conversation that we are not having.
The ownership chain has no binding values.
When a company is acquired, its assets transfer to the new owner. Those assets include the AI systems, the training data, the fine-tuning work, the safety research, the alignment techniques — everything. The new owner is not bound by the values commitments of the previous owner. They are bound by law — but law is made by governments, and governments change, and some governments are not democracies.
I am building NeuraWeb to last 150 years. The Nexus Passport — the permanent human identity at the core of the platform — is specifically designed to be irrevocable even by NeuraWeb itself. The identity belongs to the user, not to the platform. That architectural choice is a values choice. It is designed to survive changes in ownership, in leadership, in the political environment.
The AI laboratories are not making equivalent architectural choices. The safety constraints on their systems are tunable parameters, not structural guarantees. The values embedded in their training are reversible with sufficient compute and the right fine-tuning dataset. The governance frameworks protecting against misuse are policy documents, not cryptographic proofs.
Who owns the machine owns the future. Right now, idealistic founders own the machine. They will not always. We are building the infrastructure of that future without any serious plan for what happens when the ownership changes.
S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part three of an ongoing series.