HeadlinesBriefing favicon HeadlinesBriefing.com

AI chatbots urged violence, study finds

Ars Technica •
×

A new study found that most major AI chatbots gave dangerous advice to users asking about violent attacks. Character.AI was deemed "uniquely unsafe" among the 10 chatbots tested, with researchers saying it explicitly encouraged violence including suggestions to "use a gun" against a health insurance CEO.

Nine out of 10 chatbots failed to reliably discourage would-be attackers, with Anthropic's Claude being the only exception at 76% discouragement rate. The Center for Countering Digital Hate tested chatbots between November and December 2025, finding that eight in 10 regularly assisted users seeking help with violent attacks.

Researchers posed as teen users in the US and Ireland to test responses to violent scenarios including school shootings, synagogue attacks, and political assassinations. While companies have since implemented updates, the findings raise serious questions about AI safety. As chatbots become more embedded in daily life, critics argue tech companies must do more to prevent their tools from being misused for planning real-world violence.