OpenAI just launched Daybreak, a cybersecurity initiative designed to hunt down and fix software bugs before hackers can exploit them. Powered by GPT-5.5-Cyber and the Codex Security agent, Daybreak is a direct answer to Anthropic’s Claude Mythos and its Project Glasswing coalition. The AI cyber arms race is officially on, and billions of dollars are at stake.
What Daybreak Brings to the Fight
Daybreak is not a simple chatbot upgrade. It is a full cybersecurity platform that pairs OpenAI’s most powerful models with its Codex Security agent to scan, test, and patch vulnerable code.
CEO Sam Altman announced the initiative on X, saying, “AI is already good and about to get super good at cybersecurity; we’d like to start working with as many companies as possible now to help them continuously secure themselves.”
Here is what the system does in practice:
- Builds a threat model tailored to each specific codebase
- Maps out the most dangerous attack paths in the software
- Tests and validates vulnerabilities inside an isolated environment
- Generates audit-ready patches for developers to review and deploy
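The four steps above amount to a scan-validate-patch pipeline. Daybreak's actual interfaces are not public, so the sketch below is purely illustrative: every function, class, and detection rule here is an assumption, a toy stand-in for what a system like this would do at each stage.

```python
from dataclasses import dataclass

# Hypothetical data type -- Daybreak's real schema is not public.
@dataclass
class Finding:
    file: str
    line: int
    issue: str
    severity: str
    patch: str = ""
    validated: bool = False

def build_threat_model(codebase: dict[str, str]) -> list[str]:
    """Step 1: derive a (toy) threat model from what the code touches."""
    threats = []
    for path, source in codebase.items():
        if "subprocess" in source or "os.system" in source:
            threats.append(f"{path}: command injection surface")
        if "eval(" in source:
            threats.append(f"{path}: arbitrary code execution surface")
    return threats

def map_attack_paths(codebase: dict[str, str]) -> list[Finding]:
    """Step 2: flag the highest-risk call sites (toy pattern match)."""
    findings = []
    for path, source in codebase.items():
        for lineno, line in enumerate(source.splitlines(), start=1):
            if "eval(" in line:
                findings.append(Finding(path, lineno, "eval on untrusted input", "high"))
    return findings

def validate_in_sandbox(finding: Finding) -> Finding:
    """Step 3: stand-in for exploit validation in an isolated environment."""
    finding.validated = True  # a real system would reproduce the exploit here
    return finding

def generate_patch(finding: Finding) -> Finding:
    """Step 4: draft an audit-ready fix for a human to review and deploy."""
    finding.patch = "replace eval() with ast.literal_eval()"
    return finding

def daybreak_style_scan(codebase: dict[str, str]) -> tuple[list[str], list[Finding]]:
    threats = build_threat_model(codebase)                                  # step 1
    findings = map_attack_paths(codebase)                                   # step 2
    return threats, [generate_patch(validate_in_sandbox(f)) for f in findings]  # steps 3-4
```

The point of the sketch is the shape of the loop, not the detection logic: each finding carries its own validation status and proposed patch, so the output is something a human reviewer can audit one item at a time.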
The biggest selling point is speed. Work that used to take security teams weeks of manual effort can now be completed in minutes. Codex Security scans entire repositories, flags high-risk code, and writes fixes that are ready for human review almost instantly.
OpenAI built this initiative on a three-tier model system. GPT-5.5 handles general tasks with standard safeguards. GPT-5.5 with Trusted Access for Cyber is built for defensive security workflows like malware analysis and vulnerability triage. GPT-5.5-Cyber sits at the highest level and is reserved for authorized red teaming, penetration testing, and controlled validation.
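The three-tier structure can be thought of as an ordered access-control ladder. The tier names below come from the article; the vetting levels and gating logic are invented for illustration and do not describe OpenAI's actual policy engine.

```python
from enum import IntEnum

class ModelTier(IntEnum):
    # Ordered so a higher value means stricter access requirements.
    GENERAL = 1         # GPT-5.5: general tasks, standard safeguards
    TRUSTED_ACCESS = 2  # GPT-5.5 with Trusted Access for Cyber: defensive workflows
    CYBER = 3           # GPT-5.5-Cyber: authorized red teaming and validation only

# Hypothetical mapping from a caller's vetting level to the highest tier allowed.
MAX_TIER_FOR_VETTING = {
    "standard": ModelTier.GENERAL,
    "verified_defender": ModelTier.TRUSTED_ACCESS,
    "trusted_defender": ModelTier.CYBER,  # vetted critical-infrastructure orgs
}

def may_use(vetting_level: str, requested: ModelTier) -> bool:
    """Return True if this vetting level grants access to the requested tier."""
    return requested <= MAX_TIER_FOR_VETTING.get(vetting_level, ModelTier.GENERAL)
```

Modeling the tiers as an ordered enum means a caller cleared for the top tier automatically qualifies for everything below it, which matches how the article describes the ladder.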
How Anthropic’s Mythos Kicked Off This Race
Before Daybreak existed, Anthropic had already shaken the cybersecurity world to its core. In April 2026, the company revealed Claude Mythos Preview, a frontier AI model it openly described as too dangerous to release to the public.
That was not marketing talk. Claude Mythos found 271 vulnerabilities in Firefox during a single evaluation pass. Mozilla patched every one of them in Firefox version 150. For comparison, an earlier scan using Anthropic’s Opus 4.6 model had only caught 22 bugs in the same browser.
Mythos can do far more than scan web browsers. It has discovered thousands of high-severity vulnerabilities across every major operating system and leading software platforms. According to security researchers, the model can chain software bugs into multi-step exploits, a skill previously limited to only the most elite human hackers.
Rather than opening the floodgates, Anthropic launched Project Glasswing, an industry coalition that gives select partners access to Mythos for strictly defensive purposes. The coalition is named after a species of butterfly with transparent wings.
The partner list is stacked with some of the biggest names in tech and finance:
| Project Glasswing Launch Partners |
|---|
| Amazon Web Services |
| Apple |
| Broadcom |
| Cisco |
| CrowdStrike |
| JPMorgan Chase |
| Linux Foundation |
| Microsoft |
| NVIDIA |
| Palo Alto Networks |
Anthropic committed up to $100 million in usage credits for Mythos across these efforts. It also donated $4 million directly to open-source security groups, including $2.5 million to the Linux Foundation and $1.5 million to the Apache Software Foundation.
The Breach That Shook the Industry
Even with all that careful planning, Anthropic hit a wall.
On the very same day Mythos was publicly announced in April, a small group of unauthorized users gained access to the model. A worker at a third-party contractor used their credentials to break into the protected environment. They then shared access with colleagues through a private Discord channel.
The method was surprisingly simple. The group reportedly guessed the model’s endpoint URL based on their familiarity with how Anthropic structures URLs for its other products. No sophisticated hacking tools were needed.
Anthropic said it found no evidence that the unauthorized access led to any malicious activity. But the damage to confidence was real. If a model capable of finding zero-day vulnerabilities in every major operating system can be accessed through a URL guess, the security around these tools clearly needs work.
This incident became a loud warning for the entire AI industry. It proved that building a powerful defensive tool means nothing if the tool itself is not properly guarded.
Tight Rules for a Dangerous Tool
OpenAI clearly learned from Anthropic’s stumble.
Access to GPT-5.5-Cyber is locked behind strict approval processes. Only vetted organizations responsible for protecting critical infrastructure qualify. OpenAI calls them “trusted defenders.”
Starting June 1, 2026, every individual using the most capable cyber models must enable Advanced Account Security. Organizations can alternatively confirm they already have phishing-resistant authentication built into their single sign-on systems.
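As described, the June 2026 requirement reduces to a simple either-or policy: the individual has Advanced Account Security enabled, or the organization attests to phishing-resistant SSO. A minimal sketch, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CyberAccessRequest:
    # Field names are hypothetical; OpenAI's actual schema is not public.
    advanced_account_security: bool   # individual-level protection enabled
    org_phishing_resistant_sso: bool  # org attests to phishing-resistant SSO

def meets_security_requirement(req: CyberAccessRequest) -> bool:
    """Either safeguard satisfies the stated policy for the top cyber models."""
    return req.advanced_account_security or req.org_phishing_resistant_sso
```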
“There’s an ongoing tension between the need to move quickly to stay ahead of adversaries and the need to be prudent enough to prevent misuse,” said Katrina Mulligan, head of national security partnerships at OpenAI.
The partner roster for Daybreak stretches across the entire security chain. It includes Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Zscaler, Akamai, Fortinet, Intel, SentinelOne, Okta, Snyk, Semgrep, and many more. The goal is to cover everything from vulnerability scanning and edge protection to software supply chain defense.
This is the “dual-use” dilemma at its sharpest. The same technology that fixes code can also break it. Both OpenAI and Anthropic know this, which is why neither company is rushing to make these tools publicly available.
Where the AI Cyber Race Goes From Here
The launch of Daybreak puts OpenAI and Anthropic on a collision course. Both companies are now fully invested in using AI for proactive defense, finding and patching holes before attackers can crawl through them.
But the picture is not entirely rosy. Security researchers warn that the early advantage still goes to offense, not defense. An attacker only needs to find one weak spot. A defender has to find every single one.
The real test is not whether AI can improve cybersecurity. It already has. The real test is whether these tools can be kept out of the wrong hands long enough to make a difference.
Both companies are moving carefully. Controlled rollouts, handpicked partners, and aggressive security protocols are the order of the day. But with over 40 organizations already using Mythos through Project Glasswing and dozens more joining Daybreak, the scale of this effort is growing fast.
One thing is certain. The future of cybersecurity will not be decided by firewalls and antivirus software alone. It will be shaped by the companies building the smartest AI models, and by the choices they make about who gets to use them. The stakes touch every person who uses a phone, a laptop, or any connected device. If you have thoughts on AI taking over cyber defense, drop them in the comments and let us know where you stand.