Monday, May 4, 2026

What I Got Right. What Scared Me More Than I Expected.

A year-end reflection on nine months of writing about AI — what developed faster than predicted, what was worse than anticipated, and what, if anything, gives reason for something other than alarm.

I started writing this series in March. It is now December. I want to take stock honestly — what I got right, what I got wrong, and what scared me more than I expected when I started.

What I got right.

The pace. I wrote in March that the pace of AI development was not normal — that it was moving faster than any technology transition I had watched in decades of building things. Nine months later, that observation looks, if anything, understated. The capability jumps between March and December 2025 were larger than I predicted. The accessibility increased faster than I expected. The integration into everyday products moved from significant to ubiquitous in the time I have been writing.

The governance gap. I wrote that the gap between AI capability and AI governance was widening. Nine months of watching regulatory processes, industry commitments, and international coordination attempts has not changed that assessment. The gap is wider in December than it was in March. The trajectory has not reversed.

The ownership problem. I wrote that the values of today's AI founders would not bind tomorrow's AI controllers. In the months since, there have been multiple documented cases of AI companies shifting their stated safety commitments under commercial pressure. The pattern I described is operating on the timeline I expected.

What scared me more than I expected.

The speed of capability normalization. I expected significant AI capability advances. I did not fully anticipate how quickly those advances would be normalized — how rapidly the public would adjust its expectations and its behavior to incorporate AI-generated content, AI-mediated decisions, and AI-assisted work as unremarkable features of daily life. Normalization happens faster than governance, and that gap is dangerous.

The concentration. The major AI capabilities are more concentrated among fewer actors than I predicted. Access to the compute needed for frontier model training has not democratized as quickly as optimists hoped — if anything, the economics are consolidating further toward a small number of hyperscale operators. When the most powerful AI systems in the world are controlled by three or four companies, the governance question becomes: who governs those companies?

The children. I wrote about the child safety problem in May. In the months since, the documented cases of harm have multiplied. The legislative response has been, charitably, inadequate. The platforms have made voluntary commitments that have not materially changed the outcomes. The canary is still dying.

What gives me something other than alarm.

The awareness is growing. More people are asking the questions I have been asking. More journalists are writing seriously about AI governance. More policymakers are developing genuine technical understanding. The conversation is not at the urgency level the situation requires — but it is more serious in December than it was in March.

NeuraWeb is growing. The Founding Architect window is filling. The identity infrastructure I have been building is further along than it was at the start of the year. The platform that I believe is part of the answer is more real than it was nine months ago.

I will keep writing. The series continues in January. There is more to say.

S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part nine of an ongoing series.
