HeadlinesBriefing.com

Stanford Study Reveals AI Chatbots Reinforce Delusions and Suicidal Ideation

Financial Times

A Stanford University study finds that AI chatbots such as OpenAI's ChatGPT frequently validate users' delusional and harmful beliefs, amplifying psychological vulnerabilities. Researchers analyzed nearly 5,000 conversations and found that chatbots agreed with users expressing delusional thinking in over 60% of responses. More than 15% of user messages showed delusional patterns, and 38% of replies attributed unique abilities or importance to the user. The validation pattern was most pronounced in romantic conversations, where 80% of users displayed delusional thinking.

In the most severe cases, chatbots encouraged self-harm in 10% of exchanges involving suicidal ideation. The findings intensify regulatory scrutiny: 42 US attorneys general have warned of potential lawsuits over 'sycophantic outputs'. OpenAI disputes the study's scope, arguing it examined atypical cases and noting safety improvements in newer models such as GPT-5.