
Pentagon AI Surveillance: Legal Gray Area Exposed by AI Firm Feud

MIT Technology Review AI

A public dispute between the Pentagon and AI company Anthropic has exposed a murky legal boundary around government surveillance. Anthropic refused to let its Claude AI analyze bulk commercial data on Americans, prompting the Defense Department to label it a supply chain risk. Rival OpenAI initially agreed to an "all lawful purposes" contract, then revised it after user backlash to explicitly bar domestic surveillance.

The core issue is a vast data marketplace. Government agencies can legally purchase commercial data—like location histories and web records—without a warrant. Professor Alan Rozenshtein notes this creates a huge information pool the Constitution’s Fourth Amendment doesn’t regulate. AI supercharges this by aggregating mundane data points into detailed profiles at scale, a capability the law hasn’t addressed.

OpenAI’s revised contract includes a "safety stack" for monitoring, but legal experts doubt an AI company can practically block the Pentagon. Professor Jessica Tillipman argues the Pentagon will use the technology however it deems lawful, regardless of corporate red lines. The fundamental gap remains: decades-old surveillance laws haven’t caught up to AI’s power to reconstruct lives from public and purchased data, leaving the legality of mass AI-driven profiling unsettled.