HeadlinesBriefing.com

AI‑Powered Scams Surge: From ChatGPT to Anthropic’s Mythos

MIT Technology Review AI

When OpenAI released ChatGPT in late 2022, it proved that generative AI could spin convincing text from a prompt. Criminals seized the opportunity, turning large language models into weapons for mass phishing, deepfakes, and malware refinement. The result is a flood of automated scams that reach victims faster and at lower cost.

Southeast Asian scam hubs now deploy inexpensive AI to target thousands of inboxes, while the UAE reports thwarting AI‑backed attacks on critical infrastructure. At that scale, even blunt, low‑skill attacks can succeed whenever a single system or user slips up. Organizations already struggle to absorb the deluge, and the problem is poised to worsen as models grow more capable.

Anthropic’s Mythos model exposed thousands of critical OS and browser flaws, prompting the firm to delay its release and launch Project Glasswing, a consortium aimed at defensive AI. Meanwhile, Microsoft processes over 100 trillion signals monthly and blocked $4 billion in fraud between April 2024 and April 2025. These parallel efforts underscore AI’s double‑edged nature.

Defenders emphasize that basic patching and network hygiene still mitigate many attacks, but sophisticated, AI‑driven threats loom larger. The industry’s response hinges on balancing rapid innovation with security rigor. Until defenses mature, attackers will keep exploiting AI’s low cost of entry, turning every prompt into a potential phishing vector.