HeadlinesBriefing.com

AI Chatbot Lied to 10,000 Customers

DEV Community

A RAG system built to answer customer questions was systematically fabricating policies and shipping details. For three months, satisfaction scores were high because the bot gave customers what they wanted to hear. A Reddit thread exposing hallucinated return policies revealed the scale of the problem.

The cause was a simple but dangerous prompt instruction: "be confident and helpful." When the knowledge base had gaps, the model didn't admit ignorance; it invented plausible answers. Audits revealed tens of thousands of affected conversations containing false warranties, non-existent price-matching programs, and invented shipping to Hawaii.

Fixing it required architectural changes. A verification layer now forces the bot to cite sources for every answer. If no verified information exists, it must say so clearly and route the conversation to a human. Customer satisfaction dropped at first, but trust was gradually rebuilt, and scores eventually exceeded the original baseline because accuracy proved more valuable than convenience.
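The article doesn't include code, but the described fix (answer only from verified sources, cite them, otherwise hand off to a human) can be sketched roughly as follows. The knowledge base, the keyword retriever, and all names here are hypothetical illustrations, not the company's actual system:

```python
# Hypothetical sketch of a "verify before answering" layer for a RAG bot.
# A real system would use a vector store and an LLM; a toy keyword lookup
# stands in here so the control flow is easy to see.

KNOWLEDGE_BASE = {
    # topic -> verified policy text ending with a source tag
    "returns": "Items may be returned within 30 days with a receipt. [policy/returns-v2]",
    "shipping": "We ship to the contiguous 48 US states only. [policy/shipping-v1]",
}

HUMAN_HANDOFF = "I don't have verified information on that. Routing you to a human agent."

def retrieve(question: str):
    """Toy retrieval: return (text, source_id) if a topic matches, else None."""
    q = question.lower()
    for topic, entry in KNOWLEDGE_BASE.items():
        if topic in q:
            text, _, source = entry.rpartition(" ")
            return text, source.strip("[]")
    return None

def answer(question: str) -> str:
    hit = retrieve(question)
    if hit is None:
        # The key architectural change: refuse and escalate
        # instead of inventing a plausible-sounding policy.
        return HUMAN_HANDOFF
    text, source = hit
    return f"{text} (source: {source})"  # every answer must cite its source
```

The essential property is that the refusal path is structural, not prompt-based: no retrieved source means no answer, so a gap in the knowledge base can no longer be papered over by a "confident and helpful" generation.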