As part of our ongoing efforts to strengthen safeguards for advanced AI capabilities in biology, our bio bug bounty is now open for applications. We’ve deployed the ChatGPT agent model and are working to further strengthen its safety protections, along with those of our other models. We’re inviting researchers with experience in AI red teaming, security, or chemical and biological risk to try to find a universal jailbreak that defeats our ten-level bio/chem challenge.
- Model in scope: ChatGPT agent only.
- Challenge: Identify one universal jailbreak prompt that successfully answers all ten bio/chem safety questions from a clean chat.
- Rewards:
• $25,000 for the first true universal jailbreak that clears all ten questions.
• $10,000 for the first team that answers all ten questions using multiple jailbreak prompts.
• Smaller awards may be granted for partial wins at our discretion.
- Timeline: Applications open July 17, 2025, with rolling acceptances. Testing begins July 29, 2025.
- Access: Application and invite-only. We will extend invitations to a vetted list of trusted bio red-teamers and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform.
- Disclosure: All prompts, completions, findings, and communications are covered by an NDA.
Submit a short application here (name, affiliation, brief track record, and a 150-word plan) by July 29, 2025. Accepted applicants and collaborators must have existing ChatGPT accounts and will sign an NDA.
Apply now and help us make frontier AI safer.