HeadlinesBriefing.com

AI Privacy Protection Guide

9to5Mac

A recent study finds that roughly one-third of AI app users hold deeply personal conversations with chatbots, exposing sensitive information. Six leading US AI companies feed user inputs back into their models for training, creating privacy risks even for casual users. Standard AI app terms typically grant companies the right to use conversations as training data, potentially surfacing personal details to other users in future model versions.

Major AI platforms offer opt-out options for data collection, though accessibility varies. ChatGPT users can disable "Improve the model for everyone" in Settings, while Claude allows unchecking "Help improve Claude" in privacy controls. Meta AI has reportedly removed its opt-out option entirely, forcing users to contact the company directly. Even Apple's Siri, despite the company's privacy focus, hides its toggle under Settings > Privacy & Security > Analytics & Improvements.
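The opt-out locations above are easy to lose track of across apps. As a convenience, they can be recorded as a simple checklist; a minimal sketch in Python (the dictionary structure and `remaining` helper are illustrative, and the setting paths are as reported above — they may change as the apps update):

```python
# Illustrative checklist of the training-data opt-outs described above.
# Setting names/paths are as reported and may change with app updates.
OPT_OUTS = {
    "ChatGPT": "Settings > disable 'Improve the model for everyone'",
    "Claude": "Privacy controls > uncheck 'Help improve Claude'",
    "Meta AI": "No in-app toggle reported; contact the company directly",
    "Siri": "Settings > Privacy & Security > Analytics & Improvements",
}

def remaining(done):
    """Return the platforms whose opt-out has not yet been completed."""
    return sorted(set(OPT_OUTS) - set(done))

print(remaining(["ChatGPT", "Siri"]))  # → ['Claude', 'Meta AI']
```

Keeping the paths in one place makes it easy to re-verify them after app updates, since vendors occasionally move or remove these toggles.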

Beyond app settings, data brokers harvest personal information from public records and resell it, enabling spam and identity theft. Manual deletion requests are possible but tedious given the hundreds of brokers involved. Services like Incogni automate removal across hundreds of data brokers, genealogy sites, and social media platforms; the Unlimited plan even lets users submit links directly for takedown. For a limited time, 9to5Mac readers can get a 55% discount with the promo code 9TO5MAC.
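For readers who take the manual route, the tedious part is drafting the same deletion request over and over for each broker. A hedged sketch of a form-letter generator (the broker names and template wording here are hypothetical placeholders, not real contact details; real requests need each broker's actual privacy contact):

```python
# Hypothetical form-letter generator for manual data-deletion requests.
# Broker names are placeholders; a real request should go to each
# broker's actual privacy contact and cite applicable law (e.g. CCPA/GDPR).
TEMPLATE = (
    "To {broker}:\n"
    "Please delete all personal records you hold about {name} "
    "and confirm the deletion in writing."
)

def deletion_requests(name, brokers):
    """Render one deletion-request letter per broker."""
    return [TEMPLATE.format(broker=b, name=name) for b in brokers]

letters = deletion_requests("Jane Doe", ["ExampleBroker A", "ExampleBroker B"])
print(letters[0])
```

Even with templating, tracking hundreds of responses and follow-ups by hand is what makes automated services attractive.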