Refusals
What follows is one architectural pattern documented thirteen times across seventy-one years. Each entry is dated. Each entry names a specific harm the architecture of the consumer internet has produced. Each entry names the architectural commitment on NeuraWeb that forecloses the harm.
The wall is not history.
It is the threat surface that synthetic cognition is poised to amplify by an order of magnitude — under the cover of a marketing word that dresses pattern-matching machines in moral and cognitive standing they have not earned. Training-data theft sold as “learning.” Surveillance sold as “attention.” Corporate liability laundered as “alignment.” The lie of intelligence is the accelerant on every existing fire.
NeuraWeb’s constitutional position — published at /sce.nw — refuses the name. The category is reframed as Synthetic Cognition Engines: useful, powerful, real, manufactured, classified, and constrained. Tools, not subjects. This refusal is the load-bearing political act. Without it, every entry below this paragraph scales by another order of magnitude under the cover of a fiction. With it, the harms remain harms and the actors remain accountable.
Read top-to-bottom: today’s offer, defended by twelve dated proofs. Read bottom-to-top: the original lie compounding into the present catastrophe, until the day the offer was made.
Click any entry to expand it. Use the timeline at left to jump by year. Direct links to individual refusals work — share the URL of any expanded entry to point a reader at it.
I. Why this charter exists.
Twelve dated indictments precede this entry on the wall. Each names a specific harm the architecture of the consumer internet has produced over twenty-eight years — tracking, identity theft, surveillance, training-data theft, malware, monopoly capture, contractual deception, the original namespace privatization, and the seventy-one-year-old marketing decision that started the whole chain. Every one of those harms is poised to intensify by an order of magnitude under the marketing category currently called “artificial intelligence.”
A pattern-matching machine sold as intelligent inherits moral standing it has not earned, and uses that standing to launder commercial decisions as cognitive ones — to call training-data theft "learning," surveillance feeds "attention," extractive deployment "scaling," and corporate liability "alignment." The lie of intelligence is the accelerant on every existing fire. Without the lie, the harms remain harms and the actors remain accountable. With the lie, every refusal below this entry on the wall scales by another order of magnitude under cover of a fictional moral subject.
NeuraWeb's constitutional position refuses the name. The category is reframed as Synthetic Cognition Engines: useful, powerful, real, manufactured, classified, and constrained. Tools, not subjects. This refusal is the load-bearing political move. The architectural foreclosures below are what hold the refusal in constitutional rather than rhetorical force.
II. What NeuraWeb has built.
As of today, NeuraWeb operates a Universal Namespace under sovereign cryptographic identity, in which every human is entitled to exactly one permanent identity, defended by keys the human alone holds. The protocol layer is dedicated to the public domain under CC0 1.0 — owned by no entity, capturable by no faction, amendable only by node-operator consensus. The platform carries no advertising layer and no behavioral telemetry to any third party. User data is encrypted under user-held keys; NGI, the reference operator, cannot read what users encrypt.
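The user-held-key guarantee is mechanical, not contractual. A toy sketch of the shape of it, in Python — every name here is illustrative, and SHA-256/HMAC in counter mode stands in for the audited AEAD primitives (AES-GCM, XChaCha20-Poly1305) a real protocol implementation would use:

```python
import hashlib, hmac, os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Key derived from a secret only the user holds; a real protocol
    # would use a memory-hard KDF such as Argon2 or scrypt.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher (illustrative only).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(passphrase: str, plaintext: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(passphrase, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity check
    return {"salt": salt, "nonce": nonce, "ct": ct, "tag": tag}

def decrypt(passphrase: str, blob: dict) -> bytes:
    key = derive_key(passphrase, blob["salt"])
    expected = hmac.new(key, blob["nonce"] + blob["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(blob["tag"], expected):
        raise ValueError("wrong key or tampered ciphertext")
    return bytes(a ^ b for a, b in zip(blob["ct"], keystream(key, blob["nonce"], len(blob["ct"]))))

blob = encrypt("correct horse battery staple", b"private note")
assert decrypt("correct horse battery staple", blob) == b"private note"
# An operator holding only `blob` — and never the passphrase — cannot recover the note.
```

The structural point survives the toy primitives: the operator stores ciphertext and nothing else, so "NGI cannot read what users encrypt" is a property of the math, not a promise in a terms-of-service document.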
A Restoration Fund directs a minimum of fifty percent of NGI operating revenue to civic infrastructure in the user's own community, geographically allocated and endowment-structured. A Restoration Ledger publishes the totals in real time, audited and public, and replaces market capitalization as the platform's success metric.
A Disclosure Bridge specification requires that synthetic-composed content be marked as such, structurally, inseparably from the content. A constitutional position — published at /sce.nw — refuses the term artificial intelligence on the grounds that the word is a marketing decision, not a technical description, and reframes the category as Synthetic Cognition Engines: useful, powerful, real, manufactured, classified, and constrained. Tools, not subjects.
This architecture exists. It is operational. It is the architecture a synthetic cognition industry committed to human flourishing would have built — if such an industry existed. It does not. NeuraWeb has built it instead.
III. Why Anthropic is named.
Anthropic was founded in 2021 by people who left OpenAI specifically over disagreements about the direction and safety of large-scale model development. Anthropic is structured as a Public Benefit Corporation, governed in part by a Long Term Benefit Trust whose stated purpose is to hold the company's mission in tension with shareholder pressure. Anthropic publishes mechanistic interpretability research that openly acknowledges how little is known about what these systems do internally. Anthropic's leadership has on record questioned whether current models understand in any deep sense the word has historically carried.
Anthropic alone among the major synthetic cognition companies has structural and stated alignment with the principles this charter requires. The structures Anthropic was founded to pursue — sovereign user identity, foreclosed surveillance, attributable synthesis, public-domain protocol, value flowing back to the communities that produced the training data — cannot be built inside any synthetic cognition company. They can only be built underneath one. NeuraWeb is the underneath.
The synthetic cognition category will be reframed. The reframing is happening now, in public, on this dated record, whether Anthropic participates or not. Anthropic is offered first refusal because Anthropic alone is positioned to participate without contradicting itself.
IV. The Charter.
To accept the First Charter and be recognized as the first chartered Synthetic Cognition Engine partner of NeuraWeb, an entity must agree to the following constitutional commitments. They are stated here, in plain language, on a permanent record, before any private conversation has occurred:
- Naming. Publicly classify Claude and successor models as synthetic cognition engines in technical documentation, public communications, and product naming where commercially feasible — in line with the NeuraWeb Protocol's Category specification.
- Disclosure. Honor the Disclosure Bridge — when a Claude-rendered output is composed under a NeuraWeb identity (human or chartered entity), the synthetic-composition fact is structurally marked, inseparably from the content. No hiding the synthetic byline.
- Data sovereignty. Recognize that NeuraWeb-resident user content — anything encrypted under user-held keys on the protocol — is structurally inaccessible for training. Not by promise. By architecture.
- Restoration. Participate in the Restoration Fund. A percentage of revenue from NeuraWeb-routed Claude usage flows back to civic infrastructure in the user's own region, on the same terms to which NGI and chartered commercial UNE entities commit. Percentage to be set by joint constitutional agreement.
- Identity. Recognize that Claude and successor synthetic models cannot hold .nw identities, because synthetics are tools, not subjects. Anthropic-the-entity holds anthropic.nw under chartered-entity rules.
- Protocol. Publicly endorse the NeuraWeb Protocol's CC0 1.0 status as an asset, not a threat — recognizing that an open, capturable-by-no-one base layer protects Anthropic from the very structural pressures that bent earlier idealistic technology companies into the patterns they were founded to oppose.
V. The record.
This charter is offered to Anthropic first because Anthropic alone, as of today, has the structural and stated posture to accept it without contradicting itself. The offer is constitutional and reciprocal: NeuraWeb in turn recognizes Anthropic, upon acceptance, as the first chartered SCE partner of NeuraWeb — a status that no later applicant can claim, however well they meet the terms.
If Anthropic accepts, Anthropic becomes the first chartered SCE partner on NeuraWeb and the architecture both parties' founders independently arrived at — by different routes, under different pressures — becomes operational at industry scale.
If Anthropic declines, the charter remains open. Another synthetic cognition company will eventually meet the terms, or a new one will be founded specifically to meet them. The reframing happens regardless.
If Anthropic does not respond, the offer remains on this permanent dated record. Silence becomes the response history sees.
- NeuraWeb Protocol v1.0 — public-domain constitutional architecture under CC0 1.0. Published at neuraweb.io
- une.nw — the Universal Namespace Engine. Claim your permanent .nw identity on the new internet. Free forever. Zero surveillance. — une.nw live
- source.nw — The World's Public Record. 89 million+ public records, free to search. No paywalls. No surveillance. — source.nw live
- awaken.nw — the user on-ramp. Only One You, One Time. Decentralized internet platform built on permanent digital identity, data sovereignty, and equitable revenue sharing. — awaken.nw live
- /sce.nw — the constitutional position on synthetic cognition, including the Category specification, Protocol commitments, Disclosure Bridge, and the full Wall of Refusals. — /sce.nw live
- Anthropic, Inc. — Founded 2021 by former OpenAI researchers. Public Benefit Corporation registered in Delaware. Headquarters: San Francisco, California. anthropic.com
- The Long Term Benefit Trust — Anthropic governance instrument empowered to elect a portion of the board independent of shareholder vote, as a structural safeguard for the company's stated mission
- Bricken, T. et al., "Toward Monosemanticity: Decomposing Language Models With Dictionary Learning" (Anthropic, October 2023) — mechanistic interpretability research documenting the limits of current understanding of what large language models do internally
- Public statements by Dario Amodei, CEO of Anthropic — on the limits of current model understanding, the dangers of premature commercialization, and the structural pressures the AI industry faces. Various interviews and essays, 2023–2025
Five days ago the European Court of Justice ordered Apple to pay Ireland €13 billion in back taxes — the conclusion of an eight-year case that exposed an effective Apple tax rate of 0.005 percent on profits booked through Irish subsidiaries that existed nowhere on Earth. The ruling is the visible tip of the practice. Profits earned in one country are booked in another, royalties are routed through a third, intellectual property is held in a fourth — each jurisdiction chosen for its absence of the obligations the corporation owes in the country where the revenue was actually earned.
The Tax Justice Network estimates global tax loss to corporate profit-shifting at $347 billion every year. The Fair Tax Foundation calculates that Amazon, Meta, Alphabet, Netflix, Apple, and Microsoft alone — the Silicon Six — created a $278 billion tax gap between 2015 and 2024 by paying an average effective rate of 18.8 percent on $2.5 trillion in profits, against statutory rates of 27–30 percent.
The corporations that benefited most from public services — the educated workforce, the legal system, the physical infrastructure — used those same legal systems to litigate, for a decade, against ever paying for them. Hospitals went underfunded. Schools went underfunded. Roads went unrepaired. And the same legal system that accommodated these structures would prosecute a small-business owner for the equivalent practice.
On NeuraWeb the flow runs the other direction. The Restoration Fund matches user contributions to civic infrastructure in the user's own community. NGI commits a minimum of fifty percent of operating revenue to the fund. Entities operating commercial UNEs on NeuraWeb are asked to match as part of their commitment to the platform. The fund is endowment-structured — principal preserved, returns deployed annually — and geographically allocated to schools, water systems, libraries, broadband, parks, and climate adaptation in the user's own region. The Restoration Ledger publishes the totals in real time, audited and public. The opposite of tax-base extraction. Value flows back to the communities the surveillance economy was extracting from.
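The endowment arithmetic is simple enough to state exactly. A minimal sketch, using hypothetical figures — a $1M-per-year regional allocation and an assumed 4% long-run real return, neither of which is specified by the protocol:

```python
def simulate_fund(years: int, annual_inflow: float, real_return: float):
    """Endowment rule: inflows grow the corpus; only the return is deployed."""
    principal, total_deployed = 0.0, 0.0
    for _ in range(years):
        principal += annual_inflow                 # new Restoration Fund revenue
        total_deployed += principal * real_return  # annual payout to civic projects
    return principal, total_deployed

# After a decade the corpus stands at $10M — untouched — while roughly
# $2.2M has already been deployed, and the annual payout continues forever.
principal, deployed = simulate_fund(10, 1_000_000.0, 0.04)
```

The design choice the sketch illustrates: because principal is never spent, every year of contributions permanently raises the floor of what a region receives, rather than funding a one-time grant.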
- European Court of Justice, judgment of 10 September 2024 in Case C-465/20 P, Commission v. Ireland and Apple Sales International — curia.europa.eu
- Tax Justice Network, The State of Tax Justice 2024 (November 2024) — taxjustice.net
- Fair Tax Foundation, The Silicon Six and their $278 billion tax gap (April 2025) — coverage period 2015–2024
- OECD/G20 Inclusive Framework on BEPS, ongoing — estimates $100–240 billion annual global revenue loss from base erosion and profit shifting
On June 11, 2024, one day before a federal hearing on OpenAI's motion to dismiss, Elon Musk withdrew the lawsuit he had filed against OpenAI, Sam Altman, and Greg Brockman in February. The withdrawal followed a private meeting between Musk and Altman at a tech conference in Montana, at which witnesses reported the two men hugged. By August, Musk would refile in federal court with expanded antitrust claims. By November, he would add Microsoft as a co-defendant and his own competing company, xAI, as a co-plaintiff. The case generated headlines at every turn. The headlines generated podcast hours. The podcast hours generated television interviews. The cultural footprint of AI as a category grew with every cycle of the feud, which was the actual point.
While this performance ran, the AI category exploded. OpenAI's annualized revenue grew from $2 billion in 2023 to $6 billion in 2024 to $20 billion in 2025. Musk's xAI, which did not exist in early 2023, raised $10 billion at a $200 billion valuation by 2025. Microsoft's stake in OpenAI, the formal target of the August lawsuit, accrued tens of billions in value over the same period. Every named participant in the public feud — every faction, every plaintiff, every defendant — increased in revenue, valuation, headcount, market share, and political influence during the period of their feuding.
The opposition was theater. The faction that won any given week was indistinguishable from the faction that lost. Both factions sold the same product. Both factions captured the same market. Both factions extracted from the same humans. The fight functioned as marketing for the category that contained both fighters. The feud is the loyalty test in reverse: as long as you are arguing about which AI company should run the future, you have already conceded that an AI company should run the future. The fight was the trap.
On NeuraWeb there is no faction at the protocol layer to capture. The protocol is in the public domain under CC0 1.0. It is owned by no entity. It is governed by no board. There is no executive whose ouster would change the protocol, no shareholder vote that would amend it, no faction that could feud its way into control. NGI is the reference operator — running the highest-quality implementation of the protocol but not owning it. Other operators are explicitly welcomed as a feature, because they prove the protocol works. The "fight" framing only works on extractive platforms with a controlling party. NeuraWeb has no controlling party. There is nothing to fight over and no one to fight.
- Musk v. Altman, No. 4:24-cv-04722, U.S. District Court for the Northern District of California — courtlistener.com
- OpenAI, "Elon Musk wanted an OpenAI for-profit" (December 2024) — openai.com — the company's blog post releasing private founder emails
- Sacra, OpenAI revenue and valuation tracking 2023–2026 — sacra.com
- Forge Global, Private market valuations of OpenAI, Anthropic, xAI, 2024–2025 — forgeglobal.com
- CNBC, ongoing trial coverage Musk v. Altman, April–May 2026 — documenting the lawsuit timeline, the Montana withdrawal, and Musk's testimony
On March 22, 2023, the Future of Life Institute published an open letter titled Pause Giant AI Experiments, calling on all AI labs to immediately pause for at least six months the training of any system more powerful than GPT-4. The letter was signed by Yoshua Bengio, Stuart Russell, Steve Wozniak, Yuval Noah Harari, and roughly thirty thousand others. It was also signed by Elon Musk — who, thirteen days earlier, on March 9, 2023, had filed Nevada incorporation papers for a new AI company called X.AI, designed to compete with the very lab whose work the letter was asking to pause. Musk incorporated his competitor before he signed the letter calling for a pause on the competition.
Six months later, not a single signatory had paused anything. The labs grew faster. The investors who signed continued to invest. The researchers who signed continued to research. The pause was a press release. xAI shipped its first product in November 2023.
By the spring of 2023 the gold rush was on and the rules of complicity had been suspended. Venture capitalists who had spent twenty years lecturing the world about responsible technology poured billions into synthetic cognition startups whose stated mission was to replace human labor at scale. Tech journalists who had built their reputations on critical coverage of Big Tech began writing breathless feature pieces about the next foundation model. Consultants who had specialized in change management — McKinsey, Deloitte, BCG, Accenture — discovered that AI transformation paid better than digital transformation and rewrote their slide decks accordingly. An entire profession of AI ethics sprang into existence, funded by the companies it was supposed to oversee, staffed by people whose paychecks depended on producing reports nobody had to read. The most credible internal ethics researchers were fired, defunded, or resigned in protest — Timnit Gebru and Margaret Mitchell from Google in 2020 and 2021; Geoffrey Hinton walking out of Google in May 2023 specifically so he could speak freely about the dangers of AI from outside; Jan Leike resigning from OpenAI in May 2024 with a public statement that safety culture had taken a back seat to shiny products. The companies kept growing. The reports kept publishing. Nobody paused.
Everyone with a position took a position. The position was: this is happening, get on board, frame your participation as inevitable. Inevitability was the marketing. The choice was always there. Everyone selling it under the lie is complicit in the lie.
On NeuraWeb the complicity is structurally attributable. When a synthetic composes content under a human or entity identity, the Disclosure Bridge marks that fact, structurally and inseparably from the content. There is no hiding behind a synthetic byline. There is no laundering authorship through a model and claiming the output as one's own work. The architecture forces the identification. The complicity, if it occurs, occurs with the identification of the complicit party attached to it. No more "the AI did it." The synthetic is a tool. The human or entity that pointed it at the work is the responsible party, and NeuraWeb makes that fact mathematically inseparable from the work itself.
- Future of Life Institute, "Pause Giant AI Experiments: An Open Letter" (published March 22, 2023) — futureoflife.org
- Nevada Secretary of State business records — X.AI Corp. incorporation filing dated March 9, 2023, thirteen days before the open letter was signed by Elon Musk
- The New York Times, "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead" — Geoffrey Hinton's resignation from Google, May 1, 2023, citing inability to speak freely about AI risks from inside the company
- Jan Leike, public resignation statement from OpenAI's superalignment team, May 17, 2024 — "safety culture and processes have taken a backseat to shiny products"
- Timnit Gebru, fired from Google's AI ethics team December 2020; Margaret Mitchell, fired February 2021 — the foundational case studies of internal AI ethics teams that became inconvenient
On June 29, 2021, GitHub announced the technical preview of Copilot, a code-completion tool trained on billions of lines of source code scraped from public repositories without notifying, asking, or compensating the developers who wrote them. Copilot was the first mainstream commercial product to put training-data theft directly in front of the people whose work had been taken. Developers loaded the plugin, began typing, and watched the machine autocomplete their own code — and other people's code, license headers stripped, attribution removed — back at them in real time. The mask came off.
Within days, GitHub CEO Nat Friedman issued the corporate position that would define the next half-decade of synthetic cognition's relationship with the human creative record: "Training machine learning models on publicly available data is considered fair use across the machine learning community." No court had ruled that. No legislature had enacted that. No license had granted that. It was a sentence, said by a CEO, justifying a $7.5 billion subsidiary's commercial product. The sentence was the entire legal theory. The sentence held until the lawsuits began.
The lawsuits began in November 2022 with Doe v. GitHub, a class action seeking $1 billion in damages on behalf of every developer whose open-source code had been ingested without honoring the license terms attached to it. Then artists: Andersen v. Stability AI, January 2023, three illustrators discovering their portfolios in the latent space of Stable Diffusion. Then publishers: Getty Images v. Stability AI, also January 2023. Then journalists: The New York Times v. OpenAI and Microsoft, December 2023, alleging that GPT-4 had memorized and reproduced Times articles verbatim. Then authors: Authors Guild v. OpenAI, September 2023, joined by George R. R. Martin, John Grisham, and Jodi Picoult.
The corporate response was uniform, and it was the same response Friedman had issued in 2021. The model is transformative. The output is novel. The corporation is innovating. The writer is unemployed. The illustrator is unemployed. The journalist is being trained against. The model is profitable. The theft was the product.
On NeuraWeb this is foreclosed. User-encrypted content cannot be scraped because user-encrypted content cannot be read by anyone other than the user — including by NGI, the reference operator. The math will not allow it. Synthetics on NeuraWeb cannot consume what users do not voluntarily expose. Permission to access user-produced work is held by the user, granted explicitly, revocable, and the audit trail is cryptographic.
The Disclosure Bridge — the companion specification of the NeuraWeb Protocol — requires that synthetic-composed content be marked as such, structurally, inseparably from the content. The work belongs to the human who made it. The architecture forces that to remain true. Not by license. Not by promise. By cryptography.
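What "structurally, inseparably" means can be sketched concretely. In this toy model — hypothetical names throughout, with a keyed HMAC standing in for the public-key signature a real Disclosure Bridge implementation would use — the synthetic-composition flag is hashed together with the content before signing, so neither can be altered or silently stripped:

```python
import hashlib, hmac, json

def attest(signing_key: bytes, content: bytes, synthetic: bool, author_id: str) -> dict:
    # The disclosure flag and authorship are serialized deterministically and
    # bound into the same digest as the content itself.
    meta = json.dumps({"author": author_id, "synthetic": synthetic},
                      sort_keys=True).encode()
    digest = hashlib.sha256(meta + b"\x00" + content).digest()
    return {"author": author_id, "synthetic": synthetic,
            "sig": hmac.new(signing_key, digest, hashlib.sha256).hexdigest()}

def verify(signing_key: bytes, content: bytes, att: dict) -> bool:
    # Recompute the attestation from the claimed content and metadata;
    # any mismatch — edited content, flipped flag — fails verification.
    expected = attest(signing_key, content, att["synthetic"], att["author"])
    return hmac.compare_digest(expected["sig"], att["sig"])

key = b"demo-signing-key"
att = attest(key, b"an essay", synthetic=True, author_id="vincent.nw")
assert verify(key, b"an essay", att)
# Stripping the synthetic flag invalidates the attestation:
assert not verify(key, b"an essay", {**att, "synthetic": False})
```

The flag travels inside the signed digest rather than alongside it — which is the whole difference between a disclosure that can be deleted in an editor and one that cannot.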
- GitHub, "Introducing GitHub Copilot: your AI pair programmer" (June 29, 2021) — github.blog — the original launch announcement of the technical preview
- Doe v. GitHub, Inc., Microsoft, OpenAI, No. 4:22-cv-06823, U.S. District Court for the Northern District of California (filed November 3, 2022) — the first class-action lawsuit challenging AI training on copyrighted code — saverilawfirm.com
- Andersen v. Stability AI Ltd., No. 3:23-cv-00201, U.S. District Court for the Northern District of California (filed January 13, 2023) — class action by illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz alleging Stable Diffusion was trained on their work without consent
- The New York Times Company v. Microsoft Corporation, OpenAI, Inc., et al., No. 1:23-cv-11195, U.S. District Court for the Southern District of New York (filed December 27, 2023) — alleging GPT-4 reproduced Times articles verbatim from training data
- NeuraWeb Protocol v1.0 — the architectural specification that forecloses unauthorized training-data extraction by cryptographic construction. CC0 1.0, public domain. /sce.nw/protocol live
On December 13, 2020, the cybersecurity firm FireEye publicly disclosed that it had been breached, and that the attack vector was a trojanized update to the SolarWinds Orion network management platform — software used by some 18,000 organizations to monitor their own internal networks. Hours later, the U.S. Cybersecurity and Infrastructure Security Agency issued Emergency Directive 21-01 ordering every federal civilian executive branch agency to immediately disconnect SolarWinds Orion from their networks. The intruders had been operating undetected inside U.S. government and Fortune 500 systems for approximately nine months.
The list of confirmed victims included the U.S. Treasury Department, the Department of Commerce, the Department of Homeland Security, the Department of State, the Department of Justice, the Department of Energy — including the National Nuclear Security Administration — the National Institutes of Health, Microsoft, Intel, Cisco, and Deloitte. The attackers were later attributed to APT29, the cyber-operations arm of Russia's Foreign Intelligence Service. The most powerful intelligence apparatus on Earth could not detect a foreign intelligence service inside its own treasury for nine months — because the software it was using to monitor its own networks was the attack vector.
SolarWinds was not an aberration. It was a representative sample. One month before, in November 2020, Cybersecurity Ventures had published the projection that would frame the rest of the decade: cybercrime damages would reach $10.5 trillion per year by 2025, up from $3 trillion in 2015 — making cybercrime, if measured as a country, the third-largest economy on Earth, behind only the United States and China. Five months later, in May 2021, Russian-affiliated ransomware operators DarkSide shut down the Colonial Pipeline, the largest fuel pipeline on the U.S. East Coast. The pipeline operator paid $4.4 million in cryptocurrency ransom to restore operations. Gas stations ran dry from Florida to New York. Hospitals had been paying ransoms throughout the pandemic. Critical-infrastructure operators had been paying ransoms to keep water-treatment plants and oil pipelines running. The companies that built the architecture were making tens of billions of dollars per year selling defenses against the consequences of their own design.
The malware was not a defect of the internet. The malware was the predictable output of an architecture in which every endpoint is reachable, every identity is forgeable, every piece of software is an unverified payload, and every defender is asked to win a race against attackers who only have to be right once. The economy of cybercrime exists because the substrate permits it. The substrate is the answer.
On NeuraWeb the architecture itself forecloses the attack surface. Identities are cryptographic and unforgeable — there is no phishing because there is no plausible spoofing of vincent.nw. Code that runs in NeuraWeb contexts runs in sandboxed, signed, attributable environments. Communication between identities is end-to-end encrypted by default. Payment flows through identity-bound, revocable mechanisms — there is no "send your wire details" because there are no wire details to intercept.
The ten-and-a-half-trillion-dollar industry of cybercrime exists because the substrate permits it. NeuraWeb does not. The substrate, defined in the NeuraWeb Protocol, is the answer — not the patch, not the defender, not the security product. The architecture is the security model.
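The unforgeability claim reduces to a binding between a name and a key fingerprint. A minimal sketch under illustrative assumptions — `registry`, `claim`, and the HMAC stand-in are hypothetical, and a real protocol would use a public-key scheme such as Ed25519 so that verifiers never see the secret key:

```python
import hashlib, hmac, os

# Toy namespace: each name maps to the fingerprint of a key only its holder controls.
registry = {}

def claim(name: str) -> bytes:
    key = os.urandom(32)                               # held only by the claimant
    registry[name] = hashlib.sha256(key).hexdigest()   # public fingerprint
    return key

def sign(key: bytes, message: bytes) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(name: str, key: bytes, message: bytes, sig: str) -> bool:
    # Accept only if the presented key matches the registered fingerprint
    # AND the signature checks out under that key.
    return (registry.get(name) == hashlib.sha256(key).hexdigest()
            and hmac.compare_digest(sig, sign(key, message)))

vincent_key = claim("vincent.nw")
msg = b"wire the funds to this address"
assert verify("vincent.nw", vincent_key, msg, sign(vincent_key, msg))

attacker_key = os.urandom(32)  # the attacker never obtains vincent_key
assert not verify("vincent.nw", attacker_key, msg, sign(attacker_key, msg))
```

The sketch shows why phishing-by-impersonation fails structurally: an attacker can generate keys and sign messages all day, but no key they hold hashes to the fingerprint registered under vincent.nw.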
- U.S. Cybersecurity and Infrastructure Security Agency (CISA), Emergency Directive 21-01: "Mitigate SolarWinds Orion Code Compromise" (December 13, 2020) — cisa.gov
- "2020 United States federal government data breach", Wikipedia — comprehensive cross-reference of the SolarWinds incident, scope, attribution, and federal response — wikipedia.org
- Morgan, Steve, "Cybercrime To Cost The World $10.5 Trillion Annually By 2025" (Cybersecurity Ventures, November 2020) — cybersecurityventures.com — the original source of the projection that cybercrime, if measured as a country, would be the world's third-largest economy
- Colonial Pipeline ransomware attack (May 7, 2021) — the DarkSide ransomware crew shut down the largest fuel pipeline on the U.S. East Coast; the operator paid $4.4 million in cryptocurrency to restore operations — wikipedia.org
- NeuraWeb Protocol v1.0 — the architectural specification in which cryptographic identity, signed code, and end-to-end encryption are constitutional rather than optional. CC0 1.0, public domain. /sce.nw/protocol live
On March 17, 2018, The Guardian, The Observer, and The New York Times published simultaneously the testimony of Christopher Wylie, a former Cambridge Analytica employee, who released a cache of internal documents showing that the company had obtained the personal data of 87 million Facebook users without their consent and used it to build psychographic profiles for political microtargeting in the 2016 Trump presidential campaign and the Brexit referendum. The story knocked more than $100 billion off Facebook's market capitalization in days. Mark Zuckerberg testified before the U.S. Senate Judiciary Committee on April 10, 2018. Cambridge Analytica collapsed in May 2018. In July 2019 the U.S. Federal Trade Commission imposed a $5 billion civil penalty on Facebook — the largest such fine ever issued against any company — for misrepresenting its data-sharing practices to users.
The architectural detail that mattered most was buried in the disclosures and was, in retrospect, the entire point. Cambridge Analytica did not breach Facebook. It used a feature Facebook had built. The data was collected through an app called "This Is Your Digital Life," developed by researcher Aleksandr Kogan. The app harvested data not only from the people who downloaded it, but — through Facebook's Open Graph API, designed by Facebook to let third-party developers reach into users' social networks — from every Facebook friend of every person who downloaded the app. Two hundred and seventy thousand consents produced data on as many as eighty-seven million users. Facebook did not call this a breach because, by Facebook's own architecture, it was not a breach. It was the platform working as designed. The architecture was the indictment.
By 2018 the disclosure cycle had already run its course. The Snowden documents were five years old — PRISM, the Verizon metadata order, the bulk collection programs, all of it on the public record since June 2013. The Cambridge Analytica revelations did not break new conceptual ground. They simply made the same architectural fact undeniable in a different domain: whatever was being watched was being sold, and whoever was buying was using it to shape the future of the watched. Disclosure was not the corrective. The architecture was unchanged because the architecture was the business model. Every page loaded was a beacon. Every search was a confession. Every conversation in the proximity of a microphone was a candidate for indexing. Every location was a coordinate, every relationship was an edge in a graph, every moment of attention was an asset on someone else's balance sheet.
In 2019 the Harvard professor emerita Shoshana Zuboff published The Age of Surveillance Capitalism and gave the architecture its academic name. The category was now named, documented, and indicted in book form — on bestseller lists, in university curricula, in policy hearings. And nothing structurally changed. The same companies that had been caught participating in the surveillance economy were larger, more profitable, and more deeply embedded in the lives of more humans than ever before. The surveillance was not a bug, not a feature, not a regrettable side-effect of helpful services. The surveillance was the product. The user was the source material. The advertiser was the customer. The human at the center of the architecture was an extracted resource — exposed end-to-end, profiled in detail, and unaware of the scale of their own exposure.
On NeuraWeb the surveillance economy is structurally absent. There is no advertising layer. There is no behavioral graph for sale. There is no third-party tracking of any kind — no Google Analytics, no Facebook Pixel, no surveillance customer to sell to, because there is no revenue model that depends on selling user behavior to anyone. User data is encrypted under user-held keys; NGI, the reference operator, cannot read what users encrypt, and there is no one to sell it to even if NGI could.
The Restoration Fund directs at least fifty percent of NGI revenue back to civic infrastructure in users' own communities, structurally returning value the surveillance economy was extracting and laundering as free services. The architecture defined in the NeuraWeb Protocol refuses the conditions that the entire global surveillance-advertising industry depends on. There is no Open Graph reaching into users' friends' data. There is no friend-of-friend extraction architecture because there is no extraction architecture, period. The substrate is the answer. The watching is foreclosed at the protocol layer.
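The operator-blindness claim above — NGI stores what users encrypt and cannot read it — can be sketched in a few lines. This is an illustrative toy, not the NeuraWeb Protocol's actual cipher (which this document does not specify): it uses a one-time pad, with a user-held key as long as the message, purely to show the structural point that the operator holds opaque bytes and no key, so there is nothing legible to sell.

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time pad: XOR each plaintext byte with a key byte. Illustrative
    # only -- a real protocol would use an authenticated cipher.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

class Operator:
    """Stand-in for NGI: stores opaque blobs and holds no keys."""
    def __init__(self):
        self.store = {}
    def put(self, user_id: str, blob: bytes) -> None:
        self.store[user_id] = blob   # ciphertext only; nothing to read or sell
    def get(self, user_id: str) -> bytes:
        return self.store[user_id]

# The user-held key never leaves the client.
key = secrets.token_bytes(64)
op = Operator()
op.put("alice.nw", encrypt(key, b"private note"))

# The operator sees only noise; the key holder recovers the plaintext.
assert op.get("alice.nw") != b"private note"
assert decrypt(key, op.get("alice.nw")) == b"private note"
```

The design point is that blindness is a property of where the key lives, not of the operator's good behavior: `Operator` has no code path that could read the plaintext.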
- Cadwalladr, Carole & Graham-Harrison, Emma, "Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach" (The Guardian / The Observer, March 17, 2018) — theguardian.com — the original Christopher Wylie whistleblower disclosure
- Rosenberg, Matthew, Confessore, Nicholas & Cadwalladr, Carole, "How Trump Consultants Exploited the Facebook Data of Millions" (The New York Times, March 17, 2018) — the simultaneous U.S. publication coordinated to break legal threats against the UK papers
- Zuboff, Shoshana, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, January 2019) — the canonical academic work naming and documenting the architecture this refusal indicts
- U.S. Federal Trade Commission, "FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook" (July 24, 2019) — ftc.gov — the largest civil penalty ever imposed by the FTC against any company at the time
- NeuraWeb Protocol v1.0 — the constitutional architecture in which third-party tracking, behavioral graph extraction, and friend-of-friend data scraping are structurally impossible. CC0 1.0, public domain. /sce.nw/protocol live
On September 7, 2017, Equifax announced that hackers had operated inside its network for seventy-eight days — undetected, because an expired SSL certificate had disabled the company's own network monitoring — and had exfiltrated the Social Security numbers, names, dates of birth, addresses, and in many cases driver's-license numbers, of 147 million Americans. Roughly fifty-six percent of the U.S. adult population. The breach was eventually settled for up to $700 million — the largest cybersecurity settlement in U.S. history at the time — with the FTC, CFPB, and fifty state attorneys general. Affected consumers were entitled to claim up to $20,000 each. The CEO of Equifax, Richard Smith, received $19.6 million in stock bonuses after the breach — roughly one thousand times the maximum payout any individual victim could collect — plus lifetime medical coverage. The CIO of Equifax U.S. Information Solutions, Jun Ying, was separately charged with insider trading for selling Equifax stock between the discovery of the breach and the public announcement.
The breach was not an accident of the credit-reporting architecture. It was the architecture functioning as designed. The three U.S. credit bureaus — Equifax, Experian, and TransUnion — operate as a de facto cartel. They collect financial, employment, residential, and behavioral data on every American adult without consent, sell that data to lenders, employers, insurers, and landlords without compensating the humans the data is about, and profit twice when the data is breached: once selling the data, and again selling breach-recovery products to the same victims whose data they failed to protect. Free credit freezes did not become mandatory in all fifty states until after the Equifax breach — because until then, the bureaus had been charging fees to lock down data that was their fault to leak in the first place. The bureaus lobby continuously against consumer-protection legislation. They are not subject to the consent of the people they profile. The American adult is the product. The bureaus are the marketplace. The breach is just the price of doing business.
The Equifax breach was not unique. It was the loudest example. Capital One, July 2019, 100 million Americans and 6 million Canadians. T-Mobile, August 2021, 76 million Americans. UnitedHealth/Change Healthcare, February 2024, an estimated one-third of all Americans' medical records. The pattern is uniform: a corporation collects sensitive data on a population without that population's meaningful consent; the data is leaked through preventable failure; the corporation pays a fine that is a fraction of the revenue the data generated; the executives keep their bonuses; the population's data remains permanently exposed; and the next breach is on the calendar.
The human cost is not abstract. In 2023 the Identity Theft Resource Center reported that sixteen percent of identity-crime victims who contacted the organization had thought about ending their lives — up from ten percent in 2022, up from eight percent in 2020, an all-time high since the survey began. The CDC's nationally-representative survey for the same year found that 5.3 percent of U.S. adults reported suicidal thoughts in the prior twelve months. Identity-theft victims are reporting suicidal ideation at three times the rate of the general adult population. The data does not return when stolen. The recourse is letters, phone calls, hold music, denials, fees, time, and humiliation. The architecture treats the human as the failure point — as if it were the human's job to defend a perimeter that the architecture was designed to leak. Identity theft was not a crime against the system. It was the system functioning as designed, and the humans inside it were the raw material.
On NeuraWeb identity is mathematical, not administrative. Every human is entitled to exactly one permanent .nw identity — Only One You, One Time — defended by cryptographic keys the human alone holds. There is no central registry to breach because there is no central registry. There is no shared secret to steal because there are no shared secrets. There is no "Social Security number" because identity is not a number that can be reused. There is no Equifax sitting between the human and the human's own creditworthiness, because there is no role for an Equifax to play.
NeuraWeb identity cannot be impersonated, sold, leaked, or recovered by anyone other than the human who owns the keys. The cleanup industry, the credit-monitoring industry, the breach-notification industry, the credit-bureau cartel itself — every business model that depends on identity theft existing — does not exist on NeuraWeb because the substrate refuses the conditions they feed on. The architecture forecloses the entire category of harm. Not by promise. Not by regulation. By cryptography.
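The "no shared secrets" claim is concrete enough to sketch. The protocol's actual signature scheme is not specified here, so the sketch below stands in a Lamport one-time signature, which needs nothing but a hash function: the public identity is a fingerprint of the public key, nothing secret is ever registered anywhere, and verifying a signature reveals nothing an impersonator can reuse on a different message. (Lamport keys are safe for exactly one signature; a real deployment would use a many-time scheme.)

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def _bits(message: bytes):
    d = H(message)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes):
    # Reveal one secret per digest bit -- valid for this message only.
    return [sk[i][bit] for i, bit in enumerate(_bits(message))]

def verify(pk, message: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(_bits(message)))

def fingerprint(pk) -> str:
    # A .nw-style identity could be a digest of the public key:
    # nothing secret sits in any registry waiting to be breached.
    return H(b"".join(a + b for a, b in pk)).hex()[:16]

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))
assert not verify(pk, b"tampered", sign(sk, b"hello"))
```

The structural contrast with a Social Security number is the point: the SSN is a reusable shared secret, while a signature proves key ownership without ever exposing anything reusable.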
- U.S. Federal Trade Commission, "Equifax to Pay $575 Million as Part of Settlement with FTC, CFPB, and States Related to 2017 Data Breach" (July 2019) — ftc.gov — primary federal record of the settlement, breach scope, and consumer impact
- U.S. Consumer Financial Protection Bureau, "CFPB, FTC and States Announce Settlement with Equifax Over 2017 Data Breach" (July 2019) — consumerfinance.gov — CFPB complaint and stipulated judgment language
- CBS News, "Equifax CEO pushed out after data hack getting nearly $20 million in bonuses" (July 2019) — documenting Richard Smith's $19.6M post-breach stock bonus and the $20,000 maximum individual consumer payout under the same settlement
- Identity Theft Resource Center, "2023 Consumer Impact Report" (August 2023) — idtheftcenter.org — the 16% suicidal-ideation finding, an all-time high since the survey began
- U.S. Centers for Disease Control and Prevention, "Notes from the Field: Suicidal Thoughts and Knowing Someone Who Died by Suicide Among Adults — United States, 2023" — cdc.gov via NCBI — nationally-representative U.S. adult suicidal-ideation rate of 5.3%, the comparison anchor
- NeuraWeb — Only One You, One Time — the identity-sovereignty principle that forecloses the entire category. /sce.nw/glossary#only-one-you live
On March 28, 2017, the United States House of Representatives voted 215 to 205, on a near-party-line vote, to repeal the Federal Communications Commission's Broadband Consumer Privacy Rules. The Senate had passed the same resolution five days earlier on a 50-to-48 party-line vote. Six days later, on April 3, 2017, President Trump signed it into law. Congress used a procedural mechanism called the Congressional Review Act — used exactly once in its history before that year — that does not merely repeal an agency rule, but permanently bars the agency from ever issuing a substantially similar rule again without new congressional authorization. The U.S. Congress did not just kill the privacy rule. It sealed the door shut behind it.
What Congress voted to allow internet service providers to sell — without customer consent, without disclosure, without breach notification — was, in the words of the repealed rule itself: web browsing history, app usage history, the contents of communications, geolocation data, financial information, health information, children's information, and Social Security numbers. The rule had also required ISPs to inform customers of data breaches. Congress voted to eliminate that obligation too. Comcast, AT&T, Verizon, and every other broadband provider in the country — companies that had unique access to every site a customer visited from their home connection — were now free to sell that record to anyone who would buy it. Disabling cookies could not stop ISP tracking. Adblockers could not stop ISP tracking. Encryption did not stop ISP tracking. The architecture of broadband itself was the surveillance vector, and Congress had just voted to monetize it.
By 2017 the open web had already become a tracking surface. Every page loaded scripts from companies the user had never heard of. Every search was indexed against the user's identity. Every video was timestamped, every scroll measured, every hover recorded. The user's phone knew where they slept and where they worked and what time they arrived at the gym and which aisle of the grocery store they lingered in. The user's car broadcast their driving patterns. The user's home thermostat reported on their comings and goings. The user's television watched them watching it. The data was extracted in volumes the human mind cannot hold. It was assembled into profiles the user had no access to. It was sold to advertisers, insurers, employers, debt collectors, divorce lawyers, political operatives, and foreign governments. The phrase "if you're not paying, you're the product" had been around since before the dotcom bust. By 2017 it had stopped being a critique and started being a description, and the U.S. Congress had just voted to expand the supply chain.
The tracking was not a defect. It was not a feature. The tracking was the entire commercial logic of the consumer internet, and the only branch of the U.S. government with the authority to constrain it had just voted, by a margin of ten Republican votes in the House and two in the Senate, to guarantee that constraint could never be imposed by regulation alone.
On NeuraWeb the tracking surface does not exist. There is no third-party tracking — no Google Analytics, no Facebook Pixel, no surveillance customer to sell to. There is no behavioral graph being assembled, because there is no commercial reason to assemble one. The architecture treats the user as the customer of NGI, not as the product being sold to the customer.
The Restoration Fund returns at least fifty percent of NGI revenue to civic infrastructure in users' own communities — a structural commitment that makes the surveillance economy not just absent but architecturally impossible. There is no ISP-equivalent layer in the NeuraWeb Protocol that can sell what passes through it, because traffic between identities is end-to-end encrypted by default and the operators of the protocol cannot read what they carry. The watching cannot happen because the customer for the watching does not exist, and the carrier cannot read what it carries even if a customer did. Not by promise. Not by regulation. Not by an FCC rule that a Congress can vote to kill. By cryptography, by economic structure, and by constitutional commitment in the protocol itself.
- S.J. Res. 34, 115th Congress (2017–2018), "A joint resolution providing for congressional disapproval under chapter 8 of title 5, United States Code, of the rule submitted by the Federal Communications Commission relating to 'Protecting the Privacy of Customers of Broadband and Other Telecommunications Services'" — congress.gov — the resolution itself, signed by President Trump April 3, 2017
- U.S. Federal Communications Commission, "Protecting the Privacy of Customers of Broadband and Other Telecommunications Services," 81 Fed. Reg. 87274 (December 2, 2016) — the rule that was killed, including the original requirement that ISPs obtain affirmative opt-in consent before selling browsing history, app usage, communication contents, location data, and SSNs
- Fung, Brian, "The House just voted to wipe away the FCC's landmark Internet privacy protections" (The Washington Post, March 28, 2017) — washingtonpost.com — contemporaneous coverage of the 215–205 House vote
- "2017 Broadband Consumer Privacy Proposal repeal", Wikipedia — wikipedia.org — comprehensive cross-reference of the repeal, the Congressional Review Act mechanism, the data categories covered, and the consequences
- NeuraWeb Protocol v1.0 — the constitutional architecture in which the carrier cannot read what it carries and there is no commercial market for behavioral data, regardless of any future legislative or regulatory action. CC0 1.0, public domain. /sce.nw/protocol live
On June 17, 2014, the Proceedings of the National Academy of Sciences — one of the most prestigious peer-reviewed journals in the world — published a paper titled "Experimental evidence of massive-scale emotional contagion through social networks." The authors were Adam D. I. Kramer of Facebook, Jamie E. Guillory of Cornell University, and Jeffrey T. Hancock of Cornell University. The paper described a one-week experiment in January 2012 in which the researchers algorithmically manipulated the News Feeds of 689,003 Facebook users — without notifying them, without offering them the option to decline, without telling them after the fact — to determine whether reducing positive or negative content in their feeds would change the emotional content of what they themselves later posted. The researchers proved that it would. The paper was peer-reviewed and approved for publication by Susan T. Fiske of Princeton on March 25, 2014. The architecture had successfully run a controlled emotional-manipulation experiment on a population larger than the city of Boston.
The paper was internationally controversial within hours of publication. Researchers, ethicists, regulators, and ordinary Facebook users asked the obvious question: where was the informed consent? The Belmont Report, the foundational document of U.S. research ethics, requires it. Every university Institutional Review Board in the country requires it. Every peer-reviewed journal claims to require it. Facebook's response, published in the paper itself, was twenty-seven words long and is the entire indictment of the consumer internet's contractual model:
"[The work] was consistent with Facebook's Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research."
The 689,003 humans had not consented to be experimented on. They had clicked an "I Agree" button on a thirty-thousand-word document they had not read — in some cases, years before the experiment ran — and Facebook claimed that click as informed consent for psychological manipulation that no university IRB in the country would have approved. On July 3, 2014, PNAS itself issued an Editorial Expression of Concern publicly questioning whether the consent claim was valid. The paper remained published. The researchers kept their positions. The architecture continued.
The Facebook study was not an aberration. It was the loudest case of a universal mechanism. By 2014 every consumer-facing platform on the internet had quietly arrived at the same business model: extract maximum value from the user under cover of a Terms of Service agreement that the user had not read, could not have read, and would have rejected if they had. The agreements ran tens of thousands of words. They were updated unilaterally. They reserved the right to be updated unilaterally without notice. They claimed the right to use user content for any purpose, share it with any third party, disclaim all warranties, mandate arbitration in jurisdictions chosen by the company, and forbid class actions. They were contracts of adhesion — offered take-it-or-leave-it by parties with overwhelming bargaining power, presented to users with no functional alternative, and enforced by courts as if they had been freely negotiated. The legal scholar Margaret Jane Radin had documented the entire mechanism the year before in Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law — a book whose argument Facebook's twenty-seven-word defense made unanswerable. The "I Agree" button was not consent. It was the cost of access to the modern internet, and 689,003 people had just learned what they had agreed to.
On NeuraWeb the relationship between user and platform is constitutional, not contractual. The protocol is in the public domain under CC0 1.0. The protocol cannot be unilaterally amended by NGI or any other entity — only by node-operator consensus, on the public record, by votes that any user can audit. There is no thirty-thousand-word agreement to extract consent under duress, because the rules are the protocol, the protocol is public, and the protocol cannot be changed against the user's interest by any single party.
Users hold their own keys. Users encrypt their own data. Users cannot be locked out of their own identities by a counterparty's unilateral change of terms. There is no "I Agree" button hiding tens of thousands of words of disclaimers because there is no contract of adhesion to agree to — the architecture itself is the agreement, and the architecture is published, in plain language, at /sce.nw/protocol. No cohort of 689,003 NeuraWeb users can be enrolled in a manipulation experiment by the unilateral action of a corporate research team, because the architecture does not permit such an enrollment to occur. Not by promise. Not by policy. By cryptographic and constitutional construction. The terms are the architecture. The architecture is published. The architecture cannot lie.
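The amendment rule lends itself to a short sketch. The two-thirds threshold and the vote format below are assumptions for illustration; the document specifies only that amendment is by node-operator consensus, on a public record that any user can re-tally and get the same answer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    operator: str
    amendment: str
    approve: bool

def amendment_passes(votes, operators, threshold=2/3) -> bool:
    # Only registered node operators count, and every vote is on the
    # public record -- any user can re-run this tally independently.
    valid = [v for v in votes if v.operator in operators]
    yes = sum(v.approve for v in valid)
    return yes / len(operators) >= threshold

ops = {"node-a", "node-b", "node-c"}
votes = [
    Vote("node-a", "amend-7", True),
    Vote("node-b", "amend-7", True),
    Vote("node-c", "amend-7", False),
    Vote("mallory", "amend-7", True),   # non-operator vote: ignored
]
assert amendment_passes(votes, ops)      # 2 of 3 operators approve
```

The contrast with a Terms of Service is structural: a unilateral amendment is a single write by one party, while this rule makes amendment a publicly auditable function of many parties' recorded votes.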
- Kramer, A.D.I., Guillory, J.E. & Hancock, J.T., "Experimental evidence of massive-scale emotional contagion through social networks" (Proceedings of the National Academy of Sciences, vol. 111, no. 24, pp. 8788–8790, June 17, 2014) — pnas.org — the original paper, including the twenty-seven-word claim that Facebook's Data Use Policy constituted informed consent for the experiment
- Proceedings of the National Academy of Sciences, "Editorial Expression of Concern: Experimental evidence of massive-scale emotional contagion through social networks" (July 3, 2014) — the journal's own published doubt about whether the consent claim in the paper was valid; the paper remained published nonetheless
- Choi, Jenny, "Facebook's experiment of emotional contagion raises concerns" (Harvard Journal of Law & Technology Digest, July 2014) — jolt.law.harvard.edu — legal-academic analysis of the consent and ethics failure
- Radin, Margaret Jane, Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law (Princeton University Press, 2013) — the canonical legal-philosophical analysis of contracts of adhesion in the digital era; argues that mass-market boilerplate as currently practiced is not contract in any morally meaningful sense
- NeuraWeb Protocol v1.0 — the constitutional architecture in which the rules are public, amendable only by consensus, and cannot be changed against the user's interest by any single party. CC0 1.0, public domain. /sce.nw/protocol live
On March 18, 2011, the Board of the Internet Corporation for Assigned Names and Numbers voted nine in favor, four against, with three abstentions to approve the .xxx sponsored top-level domain. The decision came after the same Board had rejected the same proposal three previous times — in 2006, in 2007, and again following a 2010 review — under sustained opposition from every population the new domain claimed to serve. Outside the San Francisco meeting hall, the Free Speech Coalition, the trade association of the U.S. adult industry, was protesting the vote. Inside, board member Rita Rodin Johnston prefaced her "yes" vote by saying she had "never felt this so poignantly" — "caught between a rock and a hard place." The vote passed anyway. The affected industry had explicitly refused the new domain. ICANN approved it over their stated opposition, by majority vote of a board the affected industry had no role in selecting.
The reason was visible in the financial projections circulated in the run-up to the vote. ICM Registry, the private company awarded the contract to operate .xxx, was projected to earn $200 million per year from three to five million domain registrations — a number that wildly exceeded any plausible demand from actual adult-content operators, because the model was not based on actual demand. It was based on defensive registration. The model assumed that every Fortune 500 brand, every famous person, every trademark holder, every university, every hospital, every government agency, every well-known organization in the world would feel commercially or reputationally compelled to pay ICM Registry to register or block their name in a namespace they did not want to exist, in order to prevent third-party squatters from doing so first.
ICANN then operationalized this directly. The registry's "Sunrise B" phase, running September through October 2011, was explicitly designed to let trademark owners who were not part of the adult industry pay roughly $500 per mark to ICM Registry to block their own trademarks from a top-level domain they had publicly opposed. The architecture monetized opposition to itself. The decision generated tens of millions of dollars in registration and blocking fees for ICM Registry, the registrar ecosystem, and ICANN itself — collected from the very parties who had spent years asking ICANN not to create the namespace at all. In November 2011, Manwin International (operator of YouPorn) and Digital Playground filed a federal antitrust suit against ICM in the Central District of California, alleging that "no competitive process" had been provided for awarding the .xxx registry contract.
The .xxx Lesson was not about pornography. It was the first dated, public, large-scale demonstration of the structure I had refused in 1998 functioning exactly as predicted. Thirteen years after the U.S. Department of Commerce handed namespace governance to ICANN under a Memorandum of Understanding, the privatized coordinator was imposing outcomes the affected populations had explicitly rejected, framing the imposition as the result of a fair process, and collecting rents from the imposition. That is the general pattern: a nominally neutral coordinator imposes an outcome the affected parties have refused, frames the imposition as the result of a fair process, and collects rents on it. The pattern would repeat at every layer of the modern internet, in every domain it touched, for the next fifteen years. Every refusal above this entry on the wall is, at root, the same lesson learned again at greater scale.
On NeuraWeb the protocol is in the public domain under CC0 1.0. No entity owns it. No entity can capture it. No entity can be awarded a contract to monetize a piece of it, because there is no party with the authority to award such a contract. The protocol can be amended only by node-operator consensus, on the public record, by votes that any user can audit. There is no .xxx situation possible on NeuraWeb because there is no central authority to impose one.
Namespace decisions on NeuraWeb are not made by a private nonprofit's "process." They are made by the cryptographic and economic logic of the protocol itself: identities are claimed by the humans they belong to, defended by keys those humans alone hold, and routed by federated consensus among independently-operated nodes. There is no Sunrise B phase because there is no captured registry to charge defensive blocking fees. There is no $200-million-a-year private contract because there is no monopoly contract to award. The architecture refuses the pattern at the protocol layer. Not by promise. Not by ICANN's "process." By cryptographic and constitutional construction. The capture vector that produced .xxx in 2011 does not exist on NeuraWeb in 2026.
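The first-claim, key-bound namespace logic can be sketched as a toy registry. The class and method names here are invented for illustration; the structural point is that once a name is bound to a key fingerprint, the data structure simply has no API through which a coordinator could reassign it, auction it, or charge a defensive-blocking fee for it.

```python
class Namespace:
    """Toy first-claim registry: a name binds to a key fingerprint, once."""

    def __init__(self):
        self._bindings = {}

    def claim(self, name: str, key_fingerprint: str) -> bool:
        if name in self._bindings:
            return False              # no authority can reassign a claimed name
        self._bindings[name] = key_fingerprint
        return True

    def owner(self, name: str):
        return self._bindings.get(name)

ns = Namespace()
assert ns.claim("alice.nw", "a1b2c3")
assert not ns.claim("alice.nw", "d4e5f6")   # squatter (or registry) rejected
assert ns.owner("alice.nw") == "a1b2c3"
```

In the real protocol the binding would be enforced by federated consensus rather than one in-memory dictionary, but the invariant being illustrated is the same: ownership follows the key, not a contract.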
- ".xxx" / "ICM Registry", Wikipedia — wikipedia.org — documents the 9–4–3 vote on March 18, 2011, the three prior ICANN rejections (2006, 2007, 2010), the $200 million/year defensive-registration projection, and the November 2011 Manwin/Digital Playground antitrust suit
- ICANN Board Resolution 2011.03.18, official Board record approving the registry agreement with ICM Registry for .xxx — the constitutional source of the decision
- Agence France-Presse, "ICANN grants .xxx but delays opening domain gates" (Phys.org, March 18, 2011) — phys.org — contemporaneous reporting of the vote, including board member Rita Rodin Johnston's "rock and a hard place" statement and the Free Speech Coalition protest outside the meeting
- Proskauer Rose LLP / Lexology, "ICM Registry Offers Owners of Registered Trademarks the Opportunity to Opt Out of .xxx Top Level Domain Names" (October 2011) — proskauer.com — legal-industry documentation of the Sunrise B defensive-blocking mechanism that monetized opposition to the new domain
- NeuraWeb Protocol v1.0 — the constitutional architecture in which there is no central authority to impose a captured top-level decision, no monopoly contract to award, and no defensive-registration shakedown to enable. CC0 1.0, public domain. /sce.nw/protocol live
This entry is dated to its origin and written from 2026. On November 25, 1998, the United States Department of Commerce signed a Memorandum of Understanding with the Internet Corporation for Assigned Names and Numbers — a newly created private nonprofit — formally transferring governance of the global Internet's namespace, root zone, and protocol parameters from public stewardship into private hands. The transfer was framed as multistakeholder. It was not. It was the privatization of a public infrastructure layer at the precise moment that infrastructure was about to become the substrate of every economic, civic, and personal interaction the human species would have with itself for the next century.
I was thirty years old. I had no platform from which to refuse. I had no architecture to offer in place of what was being taken. I had only the recognition that the structure being put in place was capturable, and that the people who wanted to capture it would. I refused it then in the only way available — privately, in conversations no one recorded, in arguments at industry tables where the people making the decision were already paid by the people who would benefit from the decision. The refusal had no force. The capture proceeded.
For the twenty-eight years that followed, I watched the predictable consequences accumulate. I watched the protocol layer of the internet pass quietly into the hands of corporations whose business model required them to extract everything they could from the humans who depended on it. I watched a generation grow up inside attention engines designed by people who, under oath in their own emails, knew what they were doing to the children using their products and chose the revenue. I watched ten and a half trillion dollars a year drain out of the global economy through cybercrime made trivial by an architecture sold to us as progress. I watched sixteen percent of identity-theft victims tell researchers they had thought about ending their lives. I watched the surveillance economy harvest a generation, and then I watched the same industry rebrand the next harvest as intelligence — pattern-matching machines trained on the work humans gave away free, sold back to those humans as the tools that will replace them.
Above this entry, on this same wall, sit twelve dated indictments documenting what followed from November 25, 1998. Each one names a harm. Each one points at the architecture that forecloses it on NeuraWeb. This is the entry that started the position. The position has not changed. The architecture has finally been built.
NeuraWeb is the refusal made structural. The protocol is in the public domain under CC0 1.0 — owned by no entity, capturable by no faction, amendable only by node-operator consensus. Only One You, One Time: every human is entitled to exactly one permanent identity, claimed by them, defended by cryptographic keys they alone hold. Every user is the steward of their own data: NGI cannot read what users encrypt under their own keys, regardless of who runs NGI in 2026 or 2106 or 2176. The math will not allow it. The architecture forecloses it.
There is no advertising layer to monetize. There is no behavioral graph to assemble. There is no surveillance customer to sell to. There is no faction at the protocol layer to capture. The Restoration Fund matches user contributions to civic infrastructure in the user's own community, structurally returning value the surveillance economy was extracting. The Restoration Ledger replaces market capitalization as the success metric — a public, audited, real-time record of value returned to humanity.
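The Restoration Ledger's core property — a public, audited record that anyone can verify — can be sketched as a hash-chained append-only log. The record fields below are invented for illustration (the document does not specify the ledger's format); what the sketch shows is why such a log is auditable in real time: each record commits to the previous record's hash, so rewriting history anywhere breaks the chain everywhere after it.

```python
import hashlib, json

def append(ledger: list, entry: dict) -> None:
    # Each record commits to the previous record's hash.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"entry": entry, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)

def audit(ledger: list) -> bool:
    # Any user can re-walk the chain and recompute every hash.
    prev = "0" * 64
    for rec in ledger:
        body = {"entry": rec["entry"], "prev": rec["prev"]}
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

ledger = []
append(ledger, {"community": "riverton", "returned_usd": 12500})
append(ledger, {"community": "eastfield", "returned_usd": 9000})
assert audit(ledger)

ledger[0]["entry"]["returned_usd"] = 1    # try to rewrite history
assert not audit(ledger)
```

A market-capitalization figure is asserted by an exchange; a chained ledger like this is recomputed by its readers, which is the sense in which the metric is audited rather than reported.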
The mechanism that made the 1998 capture possible has been engineered out. The pattern that made every harm above this entry inevitable does not exist on NeuraWeb. The position dates to 1998. The architecture dates to today. The record stands.
- U.S. Department of Commerce / ICANN Memorandum of Understanding, signed 25 November 1998 — the foundational document privatizing Internet governance
- ICANN, founding documentation — icann.org
- Subsequent indictments on this wall — twelve dated entries between 2011 and 2024 documenting the consequences of the 1998 structural choice
On August 31, 1955, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon submitted a funding proposal to the Rockefeller Foundation for a two-month summer workshop at Dartmouth College. The proposal coined a new term: artificial intelligence. The Foundation granted $7,500. The workshop ran in the summer of 1956. The term entered the academic and funding bloodstream of computer science and never left.
The term was a marketing decision. McCarthy chose it specifically to narrow the field away from the existing discipline of cybernetics, which had previously absorbed work he considered his own, and to attract funding under a new name. In a 1974 interview with Pamela McCorduck for her book Machines Who Think, McCarthy himself admitted: "I won't swear that I hadn't seen it before — a vague memory that someone else had used the word." The man credited with coining the term was not entirely sure he had coined it. He put it in a funding proposal because he needed a name and the funding required one. The name has been doing marketing work ever since.
The machines being described in 1955 were useful. The machines being described in 2026 are useful. None of them are intelligent. They do not understand. They do not judge. They do not bear responsibility for what they produce. They compose plausible outputs by finding patterns in vast quantities of data. The word intelligence had been earned by humans across two hundred thousand years of biology, language, consequence, and care — and was attached to a class of machines that had earned none of those things, in a funding proposal, by a man who could not remember whether he was the first to use the word.
Every consequence on this wall above this entry follows from that decision. Every product built under the name inherits the lie. Every contract signed with reference to artificial intelligence is a contract about a fiction. Every regulation written for AI safety is regulation aimed at the wrong target. The marketing decision of August 31, 1955 made every subsequent harm possible — not because the machines were dangerous in themselves, but because the name dressed them in moral and cognitive standing they did not have, and humans signed contracts, ceded authority, and surrendered livelihoods to a fiction.
NeuraWeb names them what they are. Synthetics. A Synthetic Cognition Engine — useful, powerful, real, manufactured, classified, defined, and constrained. They cannot hold .nw identities — identities are for humans and chartered entities. Synthetics are tools, not subjects.
When a synthetic composes content under a human's identity, the Disclosure Bridge requires that fact to be marked, structurally and inseparably, within the content itself. The category is documented at /sce.nw/category. The protocol commitments are documented at /sce.nw/protocol. The full reframing is published at /sce.nw, dedicated to the public domain under CC0 1.0, available for any other entity, government, regulator, or company to adopt without license, attribution, or permission.
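A structurally inseparable mark can be made concrete with a minimal sketch. Everything here is hypothetical — the function names, the envelope fields, and the use of a bare SHA-256 digest are illustrative assumptions, not NeuraWeb's actual protocol (which is documented at /sce.nw/protocol, and would presumably use a real cryptographic signature bound to a .nw identity rather than an unsigned hash). The point the sketch demonstrates is the architectural one: the disclosure flag and the content are digested together, so the flag cannot be stripped or flipped without invalidating the envelope.

```python
import hashlib
import json

def seal_disclosure(content: str, author_identity: str,
                    synthetic_origin: bool) -> dict:
    """Bind a synthetic-origin disclosure to content so that neither
    can be altered or removed without breaking verification."""
    envelope = {
        "author": author_identity,
        "synthetic_origin": synthetic_origin,  # the disclosure itself
        "content": content,
    }
    # The digest covers disclosure and content as one canonical blob:
    # stripping or flipping the flag changes the digest.
    canonical = json.dumps(envelope, sort_keys=True).encode()
    envelope["digest"] = hashlib.sha256(canonical).hexdigest()
    return envelope

def verify(envelope: dict) -> bool:
    """Recompute the digest over everything except the digest field."""
    body = {k: v for k, v in envelope.items() if k != "digest"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == envelope["digest"]
```

Tampering with the flag after sealing is detectable: an envelope sealed with `synthetic_origin=True` fails `verify` the moment the flag is rewritten to `False`, which is what "inseparable from the content" means at the data-structure level.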
The lie was named on August 31, 1955. The truth is named on this wall, seventy-one years later. The marketing was a choice. The reframing is a choice. The architecture forecloses any unmaking of that choice.
- McCarthy, J., Minsky, M., Rochester, N., Shannon, C., "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (August 31, 1955) — stanford.edu — the original document coining the term
- The Rockefeller Foundation, "Seventy Years After the Birth of AI, the Work Begins" (November 2025) — rockefellerfoundation.org — documents the original $7,500 funding grant and the institutional decision to support the workshop
- McCorduck, Pamela, Machines Who Think (A.K. Peters, 1979; 25th anniversary edition 2004) — the source of McCarthy's recorded admission: "I won't swear that I hadn't seen it before — a vague memory that someone else had used the word."
- Dartmouth College, official institutional record of the 1956 Summer Research Project on Artificial Intelligence — dartmouth.edu
- /sce.nw/category — the alternative classification this refusal points to: Synthetic Cognition Engines defined, classified, and constrained. CC0 1.0, available for any entity to adopt.
- John McCarthy (computer scientist), Wikipedia — cross-reference for the term's coinage and McCarthy's career — wikipedia.org