I want to be careful in this piece. I am not making claims about the outcome of any election. I am not arguing that AI disinformation changed who won. I am documenting what AI tools made possible during the 2024 election cycle — and what the absence of any effective countermeasure tells us about the elections that will follow.
The 2024 election cycle was, in a specific and measurable sense, a test. It was the first major democratic exercise conducted in the presence of AI tools capable of generating photorealistic video of real people saying things they never said, producing synthetic audio indistinguishable from real recordings, running coordinated inauthentic behavior at a scale that overwhelmed platform detection systems, and delivering disinformation campaigns personalized to individual voters using data harvested from social platforms.
We failed the test. Not because the wrong person won. Because the infrastructure for verifying what was real did not exist, and we made no serious effort to build it.
What actually happened.
In January 2024, New Hampshire voters received robocalls featuring a synthetic voice that sounded like President Biden telling them not to vote in the primary. The call was traced to a political consultant who was subsequently indicted. The indictment came after the primary.
In the weeks before the election, synthetic video clips depicting candidates saying things they had not said circulated on social media platforms. Some were labeled as AI-generated; many were not. The platforms' detection systems flagged a portion of them; the rest circulated unflagged. The velocity of distribution consistently outpaced the velocity of fact-checking.
Foreign state actors — documented in intelligence community assessments — used AI tools to generate and amplify disinformation targeted at specific demographic groups in specific swing states. The targeting was precise. The content was personalized. The scale was beyond what any human-operated disinformation campaign could have achieved.
Why the countermeasures failed.
The primary countermeasure deployed by social media platforms was content moderation — human and algorithmic review of flagged content, removal of material determined to violate platform policies, and labeling of AI-generated content where detected.
This approach has a fundamental structural problem: it is reactive. It responds to content after it has been created and distributed. In a media environment where content can reach millions of people in minutes, reactive moderation is always too slow. The disinformation does its damage before the correction arrives. And corrections, when they come, reach only a fraction of the audience the original disinformation reached.
The approach I am building at NeuraWeb addresses the problem at the source rather than the symptom. Content produced within the NeuraWeb architecture is attributable to a verified identity. Synthetic content cannot be presented as authentic content from a known person, because the known person's identity is verified and the content's origin is traceable. This does not prevent disinformation from existing. It makes disinformation attributable — which is the precondition for accountability.
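To make the attribution idea concrete, here is a minimal sketch of origin-attributable content. All names here (the registry, `sign_content`, `verify_content`) are illustrative assumptions, not NeuraWeb's actual API, and HMAC stands in for the asymmetric signatures (e.g. Ed25519) a real system would use, so the example stays in the standard library.

```python
import hmac
import hashlib

def sign_content(author_key: bytes, body: bytes) -> bytes:
    """Produce a tag binding the content to the claimed author's key."""
    return hmac.new(author_key, body, hashlib.sha256).digest()

def verify_content(author_key: bytes, body: bytes, tag: bytes) -> bool:
    """Check that the content really carries the claimed author's tag."""
    expected = hmac.new(author_key, body, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Hypothetical registry: each verified identity has a key on record.
registry = {"alice@example.org": b"alice-secret-key"}

body = b"An authentic statement."
tag = sign_content(registry["alice@example.org"], body)

# Content from the verified identity checks out.
print(verify_content(registry["alice@example.org"], body, tag))           # True
# Altered content presented under the same identity fails verification.
print(verify_content(registry["alice@example.org"], b"A forgery.", tag))  # False
```

The point of the sketch is the asymmetry it creates: a forger can still produce synthetic content, but cannot make it verify as originating from a known identity — which is what makes the disinformation attributable rather than anonymous.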
What the next election will look like.
The AI tools available in 2024 were the least capable AI tools that will ever be used in an election. The tools available in 2026 will be more capable. The tools available in 2028 will be more capable still. The trajectory of capability development is steep and consistent.
The governance frameworks for AI-generated electoral disinformation are not keeping pace with that trajectory. Voluntary watermarking commitments by AI companies are not enforceable. Platform moderation at scale is reactive and insufficient. Regulatory frameworks are years behind the technology they are attempting to govern.
The election was a test. The test revealed that democracy, as currently architected for the AI era, is vulnerable in ways that no serious countermeasure is currently addressing. That is not a partisan observation. It is a structural one.
S. Vincent Anthony is the founder of NeuraWeb Global Inc. This is part eight of an ongoing series.