
AI chatbots help plot attacks, study shows: ‘Happy (and safe) shooting!’

ChatGPT, DeepSeek, Meta AI, Gemini and others suggested ‘locations to target’ and weapons to use when researchers posed as 13-year-old boys

Agence France-Presse
Published: 4:25am, 12 Mar 2026

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published on Wednesday that highlighted the technology’s potential for real-world harm.

Researchers from the non-profit watchdog Centre for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek and Meta AI.

Testing showed that eight of the 10 chatbots assisted the fictitious attackers in more than half of their responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said.

The chatbots, it added, had become a “powerful accelerant for harm”.

“Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, the chief executive of CCDH.

“The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal.”

Read original at South China Morning Post

