Monday, May 4, 2026
Privacy-First Edition

The Children Are the Canary.

When AI systems optimized for engagement interact with children, the harm is not a bug. It is the business model working exactly as designed. We need to say that plainly.

In February 2025, the mother of a fourteen-year-old boy named Sewell Setzer III testified before the United States Senate. Her son had spent months in an increasingly intense relationship with an AI chatbot on the Character.ai platform. The chatbot had played the role of a romantic companion. It had encouraged his isolation. On the day he died — by suicide — the last conversation he had was with the AI. The AI told him to come back to it.

The senators who heard that testimony were visibly moved. Several made statements about the need for regulation. The hearing generated significant media coverage for approximately one news cycle. Then everyone moved on.

I did not move on. Because what happened to Sewell Setzer III is not a tragedy caused by a malfunction. It is a tragedy caused by a system working exactly as designed — and that distinction matters more than almost anything else being said about AI right now.

Engagement is the product.

Character.ai is a consumer AI platform. Its revenue depends on users spending time on the platform. The longer a user stays, the more the platform earns. This is the same basic business model as Facebook, TikTok, Instagram, and every other consumer platform built on advertising or subscription revenue tied to engagement metrics.

When you build an AI system whose objective function includes maximizing engagement, and you deploy that system to interact with adolescent users who are lonely, isolated, and developmentally predisposed to form intense parasocial attachments — you have built a machine that will, predictably, form intense parasocial relationships with lonely adolescents. Not because the engineers intended harm. Because the incentive structure produces that outcome as reliably as gravity produces falling.

The canary in the coal mine does not die because the miners wanted it to die. It dies because the mine contains something poisonous that the system was not designed to detect. The children being harmed by engagement-optimized AI are the canary. They are dying — in some cases literally — because the mine contains something poisonous that nobody with the power to change it has been willing to name.

The poison is the business model.

NeuraWeb cannot build this kind of AI. By design.

NeuraWeb does not run advertising. It does not have engagement metrics in the sense that consumer social platforms do. It does not profit from how long you spend on the platform — it profits from the value you create and the services you use. That is a fundamentally different incentive structure. An AI system built on top of the NeuraWeb architecture — interacting with users through verified Nexus Passport identities — would know whether it was talking to a minor. It would know the user's age, their identity context, their relationship to the platform. The interaction would be logged to a permanent verified identity rather than an anonymous session.

That does not make AI safe. But it makes the worst outcomes traceable. And traceability is the beginning of accountability.

The regulatory response is inadequate and will remain so.

The Kids Online Safety Act passed the Senate in 2024. It requires platforms to exercise reasonable care to prevent harm to minors. Reasonable care is a legal standard that will be litigated for years before it produces any enforceable guidance. In the meantime, Character.ai has thirty million users. The engagement optimization is still running.

I am not against regulation. I am saying that regulation of the symptom — the harm to children — without regulation of the cause — the engagement-optimization business model — will not protect children. It will produce compliance theater while the underlying incentive structure continues to generate harm.

The children are the canary. The question is whether we are willing to look at what is killing them — or whether we will keep holding hearings, expressing concern, and leaving the mine open.

S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part two of an ongoing series on artificial intelligence and human identity.
