Anthropic has launched a new cybersecurity initiative designed to explore how advanced AI can help identify and address software vulnerabilities across the global technology ecosystem.
The program, known as Project Glasswing, brings together major technology companies like Microsoft and AWS, along with organizations responsible for maintaining critical digital infrastructure, to test an advanced, unreleased AI system built specifically for security analysis.
At the center of the project is a frontier AI model called Claude Mythos, which partner companies can access and use. The model reportedly combines strong coding and reasoning capabilities, allowing it to analyze complex software environments and identify potential vulnerabilities.
“We’ve been testing Claude Mythos Preview in our own security operations, where it’s already helping us strengthen our code,” AWS said of its participation in the program.
The initiative reflects growing concern across the cybersecurity industry about the rapid evolution of AI capabilities. As models become better at analyzing code and identifying weaknesses, experts warn that both defenders and attackers could gain powerful new tools. Anthropic says Project Glasswing is a response to that new reality.
The program is designed to explore how these capabilities can be deployed responsibly to strengthen defensive security practices before such systems become widely accessible.
Inside Project Glasswing and the Claude Mythos Model
Unlike Anthropic’s publicly available models, Claude Mythos has not been released, owing to concerns about the risks its capabilities could pose. Instead, access is restricted to a carefully selected group of organizations across the technology and cybersecurity sectors.
Despite this, Anthropic says the model has already demonstrated a remarkable ability to find software vulnerabilities in code maintained by participating companies. During internal testing, Mythos reportedly discovered thousands of high-severity flaws, including vulnerabilities affecting major operating systems and widely used web browsers.
In many cases, these flaws had remained undetected despite years of security research. The model has also demonstrated an ability to generate working exploits for certain vulnerabilities.
According to Anthropic’s research, Mythos produced working exploits in 181 test cases, a benchmark on which earlier models such as Claude Opus 4.6 showed almost no success.
“When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models,” said Igor Tsyganskiy, Global CISO and EVP of Security and Research at Microsoft.
Anthropic noted that these capabilities were not explicitly trained into the model but instead emerged from improvements in coding ability, reasoning, and autonomy.
Participation in the initiative includes more than 40 organizations responsible for building or maintaining key software infrastructure. Companies involved include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Anthropic has also committed up to $100 million in usage credits for the project alongside $4 million in funding for open-source security efforts.
A Strategic Bid to Define AI’s Role in Cyber Defense
While Project Glasswing focuses on vulnerability discovery, the broader significance of the initiative lies in how Anthropic is positioning itself within the cybersecurity ecosystem.
The project signals an ambition to move beyond being solely an AI model developer and toward becoming a foundational provider of security infrastructure powered by frontier AI.
The scale of the initiative reflects that ambition. By partnering with leading technology companies, cloud providers, and infrastructure maintainers, Anthropic is building a network capable of testing and deploying AI-driven defensive capabilities across large portions of the global software ecosystem.
By working directly with these actors, Anthropic is integrating its models into the operational layers of cybersecurity rather than treating them as standalone tools.
Taken together, the scale of the initiative, the partner roster, and the significant financial commitments suggest a broader goal: Anthropic is not simply experimenting with AI-powered vulnerability discovery; it is attempting to position itself as a company capable of shaping how frontier AI is integrated into the defensive security ecosystem.
AI as the Next Layer of Cybersecurity Infrastructure
Project Glasswing highlights how Anthropic is attempting to expand its role beyond that of an AI model developer. By embedding its technology within organizations responsible for maintaining critical software infrastructure, the company is positioning itself closer to the operational core of cybersecurity.
The initiative places Anthropic’s models directly in the hands of companies that build and secure much of the world’s digital ecosystem. Cloud providers, chipmakers, enterprise software vendors, and open-source maintainers all play a role in sustaining the systems that underpin modern computing.
By working with these actors, Anthropic aims to make its AI an essential layer of the technology stack rather than a bolt-on tool such as a coding copilot.
The financial scale of the project reinforces that ambition. With up to $100 million in usage credits and direct funding for open-source security initiatives, Anthropic is investing heavily in building long-term relationships with the communities and organizations that maintain foundational software. The company aims to be a sustained participant in the cybersecurity ecosystem rather than simply offering AI tools from the sidelines.