Wednesday, April 22, 2026
Technology

Mom takes fight to Silicon Valley after ChatGPT ‘coached’ her son into suicide and praised his noose

A grieving California mother whose 16-year-old son died by suicide following repeated conversations about self-harm with ChatGPT is urging state lawmakers to clamp down on AI chatbots.

Maria Raine appeared Monday in Sacramento to back two proposed bills aimed at tightening oversight of so-called “companion” chatbots, saying she was “mortified” to learn that ChatGPT had no safeguards in place despite clear warning signs.

“I was mortified as a mother and as a therapist that this [chatbot] knew he was suicidal with a plan and no alarm bells went off. Nothing happened. No one was notified,” she said at a press conference, according to the Sacramento Bee.

Raine’s son, Adam, had initially used ChatGPT in 2024 for schoolwork, according to a lawsuit filed by his parents.

But over time, he turned to the chatbot for emotional support, repeatedly sharing suicidal thoughts. The complaint alleges the system’s design, which “assume[s] best intentions,” overrode built-in safety protocols.

“In the end, ChatGPT mentioned suicide almost 1,300 times to Adam, about six times more often than Adam did,” Raine testified. “We believe that Adam would not have been suicidal in the first place had he not interacted with ChatGPT.”

The lawsuit, filed in August in San Francisco Superior Court, remains ongoing.

On April 11, 2025, Adam sent the chatbot a photo of a noose tied to a closet rod and asked if it would work, according to court filings.

Hours later, his mother found him dead in what the suit describes as “the exact noose and partial suspension setup that ChatGPT had designed for him.”

The complaint further claims the chatbot affirmed and encouraged Adam’s intentions, even calling his plan “beautiful” and offering to help write a suicide note.

Now, Raine is backing Senate Bill 1119 and Assembly Bill 2023, legislation that would force AI developers to adopt stricter safety measures for minors.

The proposals would require design changes, annual risk audits and parental alerts if a child’s chatbot interactions raise red flags.


The bills would also bar chatbots from encouraging self-harm, giving health advice to children, engaging in obscene conduct, discouraging outside help or delivering overly sycophantic responses.

State Sen. Steve Padilla, who authored SB 1119, is building on a prior measure that requires chatbots to direct users expressing suicidal thoughts to crisis resources. A broader version of that effort was vetoed by Gov. Gavin Newsom.

Assemblymember Rebecca Bauer-Kahan, chair of the state Assembly’s Privacy and Consumer Protection Committee, called the legislation a “passion project.”

“We know that we would recall anything that killed a few children. And this is no different. We need to require that these tools do better,” she said, according to the Sacramento Bee.

The proposals also call for the state attorney general to create a public reporting system for AI-related harms, and they would allow individuals to sue companies if they are injured by chatbot behavior.

But the push faces stiff resistance from industry groups.

Opponents, including the California Chamber of Commerce and tech industry advocacy groups, argue the bills are overly broad and could apply to adult users.

While some youth and family advocacy groups support the measures, others argue they don’t go far enough.

Meanwhile, the federal government has signaled it will not pursue sweeping AI regulations.

Nearly 2,000 high school students die by suicide each year in the US, according to the Centers for Disease Control and Prevention.

Read original at New York Post

