HeadlinesBriefing.com

OpenAI sued over failure to police violent ChatGPT user

Ars Technica

OpenAI faces seven California lawsuits in the wake of a Canadian school shooting. Lawyers claim the company ignored its own safety team's warnings that a ChatGPT user posed a real‑world gun‑violence risk. The user had been flagged months before the Tumbler Ridge massacre, yet OpenAI deactivated the account rather than reporting it to police.

OpenAI CEO Sam Altman later issued a public apology and pledged to strengthen safeguards. Critics say the apology came too late and that OpenAI's response was driven by a looming $852 billion IPO. Families argue the firm deliberately concealed violent users to protect Altman's reputation and the company's valuation.

The lawsuits allege that OpenAI's safety team, which had identified the shooter's account as a credible threat, was overruled by senior executives. Instead of reporting the user to police, who already had a file on him and had removed guns from his home, OpenAI merely deactivated the account and later permitted re‑registration, allowing the planning to continue.

If the claims hold up, the case could force AI firms to adopt stricter reporting protocols for violent content. For consumers, it signals that platforms like ChatGPT may expose users to unforeseen risks unless transparency and accountability improve. The outcome could shape regulatory expectations for AI safety worldwide.