I am not an AI researcher. I am not a computer scientist. I am not a venture capitalist with a portfolio of AI companies whose valuation depends on public enthusiasm for the technology I am funding.
I am a builder. I have spent the last several years constructing an alternative architecture for the internet — one built on permanent human identity, zero surveillance, and the radical proposition that the platform should serve the user rather than extract value from them. That position, which felt contrarian three years ago, feels urgent now. Because what I have watched AI become over the last twelve months has made everything I am building more necessary, not less.
Let me tell you what I have seen. And let me tell you what nobody with a financial interest in AI adoption is willing to say plainly.
The pace is not normal.
I have watched technology develop for decades. I worked at IBM. I have seen technology cycles — the mainframe, the PC, the internet, mobile, cloud. Each of those transitions was significant. Each of them changed the economy, changed behavior, changed power structures. None of them moved at the pace that AI is moving right now.
In the last twelve months alone: GPT-4 was released and within weeks had been integrated into products used by hundreds of millions of people. Claude 2 followed. Gemini arrived. Image generation went from novelty to professional tool in months. Voice cloning became accessible to anyone with a laptop and forty-five minutes of audio. Video generation crossed the threshold from obviously synthetic to plausibly real. Code generation reached the point where significant portions of production software at major companies are being written by AI systems rather than human engineers.
None of this is speculation. This is what happened. In one year.
The governance is not keeping pace. It is not even trying to.
In that same twelve months: the United States Senate held hearings in which senators asked AI company executives questions that demonstrated they did not understand the technology at the level required to regulate it. The European Union passed the EU AI Act — a significant document that will take years to implement, that contains exemptions for national security uses that swallow much of its protective intent, and that was already partially outdated by the time it was signed. The Biden administration issued an executive order on AI safety that was substantive in its ambitions and almost entirely dependent on voluntary compliance by the companies it was attempting to constrain.
I am not saying these efforts were made in bad faith. I am saying that the gap between the pace of AI capability and the pace of AI governance widened over those same twelve months. It did not narrow. And the people with the most influence over the pace of development are the people with the greatest financial interest in not slowing down.
The people building it are not the people who will control it.
This is the thing I keep coming back to. The founders of OpenAI, Anthropic, Google DeepMind, and the other major AI laboratories are, by and large, people who have thought seriously about the risks of what they are building. Some of them have written about those risks in public. Some of them have left lucrative positions specifically because they were worried about the direction of the technology.
But they will not always control what they have built. Companies get acquired. Leadership changes. Investors with different priorities take control. The founders' values carry no binding legal authority over whatever comes next. I am building a platform that I intend to last 150 years — and I think every day about what it means to build something that will outlive me and outlive everyone I know. The AI laboratories should be thinking the same thoughts. Most of the evidence suggests they are not.
What I am building is part of the answer.
NeuraWeb is not an AI company. I have no AI product to sell. I have no investor relationship with the companies building the systems I am describing. What I have is a permanent human identity infrastructure — the Nexus Passport — and a conviction that the single most important missing piece in the AI governance conversation is verified human identity.
An AI system that interacts with anonymous users, that has no way of knowing whether it is talking to a child or an adult, a vulnerable person or a stable one, a citizen of a democracy or an agent of an authoritarian state — that system cannot be governed. You cannot hold it accountable. You cannot trace its harms. You cannot design its protections. The anonymity is not a feature. It is the root of the problem.
I will be writing about this for the next year. Each piece will go deeper. The children harmed by engagement-optimized chatbots. The ownership problem — what happens when the idealistic founders are replaced by whoever pays the most. The governance gap that is widening while we watch. The election integrity collapse. The acceleration that is not slowing down.
I have been watching. You should be watching too.
S. Vincent Anthony is the founder, Chairman, and CEO of NeuraWeb Global Inc. This is the first in an ongoing series on artificial intelligence, human identity, and what comes next.