I want to start this piece with a specific observation rather than a general claim.
In October 2025, OpenAI released a model — internally designated o3 — that achieved performance on a suite of graduate-level reasoning tasks at which previous models had comprehensively failed. The jump in performance was not incremental. It was qualitative. Researchers who had been tracking AI capabilities for years described it as a discontinuity — a step change rather than a point on a continuous curve.
In December 2025, Google released Gemini 2.0, with multimodal capabilities — text, image, audio, video, and code, integrated — that had been predicted to be years away. In January 2026, Anthropic released Claude 4 with reasoning capabilities that represented another significant jump from its predecessor.
Three qualitative capability jumps in four months. This is the acceleration.
What qualitative jumps mean.
The difference between incremental and qualitative capability changes matters for governance. Incremental improvements are easier to accommodate — existing frameworks can be extended, existing safeguards can be updated, existing risk assessments can be revised. Qualitative jumps create entirely new risk profiles that existing frameworks were not designed for.
A model that can reason at the graduate level across multiple domains is not just a better version of a model that could not. It is a different kind of tool with different kinds of applications, different kinds of misuse potential, and different kinds of governance requirements. The governance framework that was adequate for GPT-3 is not adequate for o3. The framework that was adequate for o3 is not adequate for whatever comes in six months.
The slope of the capability curve is steeper than the slope of the governance curve, and the gap between them is now critical. I have been saying this for nine months. The margin has not narrowed. It has widened.
The concentration problem is getting worse.
The compute required to train frontier models is concentrating further. The number of organizations capable of training at the frontier is shrinking, not growing. This is counterintuitive — most technology democratizes over time, becoming more accessible as the economics improve. AI training compute is not following this pattern at the frontier. The models that represent the greatest capability — and the greatest risk — are accessible to fewer actors than ever, while the models available to everyone are rapidly approaching capabilities that were frontier only recently.
This creates a two-tier landscape: a small number of organizations with frontier capability and enormous influence over the direction of AI development, and a vast ecosystem of applications built on open or accessible models that are approaching the capabilities those frontier organizations held eighteen months ago. Both tiers present governance challenges. Neither tier is being governed adequately.
What I am watching in 2026.
Three things will determine whether the governance gap begins to close or continues to widen. First: whether the major AI laboratories honor their stated commitments to pause or gate deployment of capabilities that exceed defined risk thresholds. So far, commercial pressure has consistently won when it has conflicted with safety commitments. The trend is not encouraging.
Second: whether the international coordination effort around AI governance produces anything with enforcement mechanisms rather than voluntary commitments. The G7 Hiroshima process produced principles. Principles without enforcement are suggestions.
Third: whether the human identity infrastructure required for accountable AI deployment gets built — by someone, with the right values, at the right scale. This is what I am building. I am doing it because I do not see anyone else doing it with the architectural commitments it requires.
The acceleration is not slowing down. The governance is not keeping pace. That gap is now critical. I will keep writing.
S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part ten of an ongoing series.