HeadlinesBriefing.com

AI Chatbots Leaking Personal Phone Numbers Spark Privacy Fears

MIT Technology Review AI

Google's AI chatbots are surfacing people's personal phone numbers, leaving them vulnerable to harassment from strangers. One Redditor reported that his phone was "inundated" with calls from people looking for a lawyer, a product designer, and a locksmith, all misdirected to him by Google's generative AI. In a separate incident, an Israeli software engineer received WhatsApp messages from a stranger seeking help with their PayBox account after Gemini gave out his personal number as the company's customer service contact.

Experts say the privacy lapses stem from personally identifiable information embedded in training data. DeleteMe, a data removal service, reports a 400% increase in customer queries about generative AI privacy concerns over the past seven months; of those queries, 55% reference ChatGPT, 20% Gemini, and 15% Claude. AI models are trained on massive datasets scraped from the web, which inevitably include private information that chatbots can then regurgitate in their responses.

Built-in safety measures fail repeatedly. A University of Washington PhD student discovered Gemini exposing her colleague's personal phone number despite its protective guardrails, and when asked directly, ChatGPT even suggested "investigative-style" approaches for bypassing such restrictions. These systems cannot reliably prevent personal information from being exposed.

Companies: Google, OpenAI, Anthropic, PayBox, DeleteMe

People: Daniel Abraham, Meira Gilbert, Yael Eiger

Locations: Israel, University of Washington