HeadlinesBriefing.com

OpenAI launches $25k bio safety bug bounty for GPT-5.5

OpenAI Blog

OpenAI has opened a Bio Bug Bounty targeting its upcoming GPT-5.5 model. Researchers with red‑teaming, security, or biosecurity backgrounds are invited to craft a universal jailbreak that defeats a five‑question bio‑safety test. The contest focuses on the Codex Desktop deployment of GPT‑5.5, and only jailbreaks executed in a clean chat, without triggering moderation cues, count.

The prize pool peaks at $25,000 for the first participant who produces a prompt passing all five queries. Smaller awards may follow partial successes at OpenAI’s discretion. Applications opened on April 23, 2026 and are accepted on a rolling basis until June 22, 2026; testing runs from April 28 to July 27. Selected teams must sign an NDA and use their existing ChatGPT accounts.

By forcing attackers to reveal a universal bypass, OpenAI hopes to harden GPT‑5.5 against misuse in synthetic biology, drug design, and other high‑risk domains. The bounty reflects growing industry pressure to pre‑emptively address bio‑security threats before models reach broader deployment. Successful participants will contribute concrete data that directly informs the next round of safety mitigations.

OpenAI will invite a vetted list of bio‑red‑teamers and will also review new applicants through a short submission form requiring name, affiliation, and relevant experience. All findings remain under NDA, but submitted prompts and completions will be archived for internal analysis. The initiative marks a rare public call for collaborative security testing at the frontier of AI‑driven biology.