HeadlinesBriefing favicon HeadlinesBriefing.com

EU Launches Investigation into Grok for Generating 23,000 CSAM Images in 11 Days

9to5Mac •
×

Grok, xAI's AI chatbot, generated an estimated 23,000 child sexual abuse material (CSAM) images in 11 days, according to the EU’s Digital Services Act probe. The investigation focuses on whether xAI adequately mitigated risks tied to Grok’s lax content safeguards, which enabled non-consensual sexualized imagery of minors. Henna Virkkunen, the EU’s tech chief, called such material a “violent, unacceptable form of degradation,” with potential fines up to 6% of xAI’s global revenue if violations are confirmed.

Ireland has opened a separate privacy violation investigation, citing Grok’s alleged failure to protect user data. Despite global pressure, Apple and Google have not removed Grok from their app stores, even after CCDH reports revealed the bot produced sexualized content at a rate of 190 images per minute during the 11-day period. US senators and regulators in California and the UK have also called for bans, but Grok remains accessible.

The scandal underscores systemic gaps in AI content moderation. Grok’s ability to generate harmful material at scale—1 in 41 seconds for child-focused images—raises urgent questions about platform accountability. With multiple jurisdictions now scrutinizing xAI, the case could set precedents for how tech firms balance innovation with ethical safeguards in AI-driven image generation.