Monday, May 4, 2026

The Governance Gap Is Measured in Decades.

Every existing framework for governing artificial intelligence — reviewed honestly — reveals the same conclusion: the gap between AI capability and AI governance is widening, not narrowing.

I want to do something in this piece that almost nobody writing about AI governance does: I want to go through the actual existing frameworks honestly, evaluate what each one does and does not accomplish, and arrive at a conclusion based on evidence rather than hope.

The conclusion will not be comfortable. But I think you deserve the honest version.

The EU AI Act.

The European Union's Artificial Intelligence Act is the most comprehensive AI governance legislation passed by any major jurisdiction. It establishes a risk-based framework — categorizing AI applications by their potential for harm and imposing corresponding requirements. High-risk applications in areas like biometric identification, critical infrastructure, employment, and education face the most stringent requirements.

What it accomplishes: it establishes the principle that AI applications should be governed according to their risk profile, creates disclosure requirements for AI-generated content, and prohibits certain applications outright — including social scoring systems and, with carve-outs for law enforcement, real-time remote biometric identification in publicly accessible spaces.

What it does not accomplish: it contains a national security exemption that is broad enough to swallow significant portions of its protective intent. It will take years to implement and is already partially outdated by the pace of capability development. It applies only within the EU, while the AI systems it is attempting to govern are global. And its enforcement mechanism depends on national regulatory bodies that do not yet have the technical expertise to evaluate compliance effectively.

The US Executive Order on AI.

The Biden administration's October 2023 executive order on AI was substantive — it required safety testing and reporting for frontier models, established standards for watermarking AI-generated content, and directed federal agencies to develop AI governance frameworks. It was the most serious executive action on AI taken by any US administration.

What it accomplishes: it signals that the federal government takes AI risk seriously and directs significant resources toward understanding and mitigating it.

What it does not accomplish: executive orders are not law. They can be and have been rescinded by subsequent administrations. The voluntary commitments extracted from AI companies are voluntary. There is no enforcement mechanism with real teeth. And the order did not address the fundamental economic incentive structure that drives unsafe AI development.

Industry self-governance.

The major AI laboratories have all published safety frameworks, responsible scaling policies, and commitments to various governance practices. The Frontier Model Forum, established in 2023, attempts to coordinate safety practices across major developers.

What it accomplishes: it establishes norms within the leading AI development community and creates some reputational accountability for violations of stated commitments.

What it does not accomplish: voluntary industry self-governance has a consistent historical record of being insufficient for technologies with significant externalities. The tobacco industry self-governed. The financial industry self-governed before the 2008 crisis. The social media industry self-governed before the Senate hearings on child safety. In each case, the externalities were not adequately addressed by voluntary commitments until regulatory or legal force was applied. There is no reason to believe AI will be different.

The honest conclusion.

The governance gap is not a temporary condition that will resolve itself as regulators catch up. It is a structural feature of a situation in which the people with the most resources and the most influence over AI development have a financial interest in minimizing governance constraints, and the people with the authority to impose those constraints lack the technical understanding, the political will, and the international coordination required to do so effectively.

I am building NeuraWeb as part of the answer — not because I think a single platform can solve a problem of this scale, but because the human identity infrastructure that responsible AI governance requires has to be built by someone, and the people with the financial interest in building it the right way are not the people currently building it.

The gap is measured in decades. We do not have decades to close it.

S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part six of an ongoing series.
