Why Anthropic Is Keeping Its Most Powerful Hacking Tool Under Lock and Key

Anthropic has developed an AI tool called Mythos that can uncover thousands of security weaknesses in major operating systems and web browsers, but the company is deliberately restricting access to prevent hackers from weaponizing it. Instead of releasing Mythos publicly, Anthropic is sharing the technology only with select major companies like Amazon, Apple, Cisco, JPMorgan Chase, and Nvidia through an initiative called Project Glasswing, allowing these organizations to strengthen their defenses before the tool potentially falls into malicious hands.

What Makes Mythos Different From Other Security Tools?

Mythos represents a significant leap in AI-powered vulnerability detection because it can rapidly scan thousands of lines of code and identify problems that humans might miss entirely. The tool has already uncovered thousands of weak points across every major operating system and web browser, demonstrating capabilities that could either protect critical infrastructure or, if misused, enable devastating cyberattacks.

The power of AI in finding bugs comes from its speed and pattern recognition abilities. Unlike human security researchers who manually review code, AI models can process vast amounts of information simultaneously and detect subtle vulnerabilities that might take humans weeks or months to discover. This efficiency is precisely what makes Mythos valuable for defense, but also dangerous if weaponized.

Why Are Federal Officials and Financial Leaders Concerned?

The implications of Mythos have caught the attention of top government and financial officials. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with major bank CEOs in a closed-door meeting to discuss Mythos and other emerging AI-related cybersecurity risks. Additionally, Anthropic briefed senior U.S. government officials and key industry stakeholders on the tool's capabilities.

International financial leaders are equally alarmed. IMF Managing Director Kristalina Georgieva warned that the world lacks the ability to protect the international monetary system against massive cyber risks stemming from AI, noting that risks have been growing exponentially. She emphasized the need for stronger guardrails to protect financial stability in an AI-driven world.

How Are Hackers Already Using AI for Attacks?

While Anthropic carefully controls Mythos access, the reality is that threat actors already have access to advanced AI models and are actively weaponizing them. Hackers are using AI to execute a range of malicious activities, including spreading malware, carrying out identity theft scams, producing deepfake videos, and launching ransomware attacks. Some of these attacks operate autonomously, meaning they can execute without direct human intervention.

The speed at which AI capabilities are being weaponized is accelerating. According to consulting firm PwC, the time between when an AI company releases a new capability and when threat actors weaponize it shrank dramatically in 2025, a trend expected to accelerate further in 2026. AI-enabled tools have empowered even low-skilled attackers to execute high-speed, high-volume operations, while advanced adversaries are using AI to increase precision, scale automation, and compress attack timelines.

What Specific Threats Are Organizations Facing Right Now?

Beyond vulnerability discovery, hackers are already leveraging AI for more targeted attacks. Phishing emails, traditionally a volume-based attack method, are becoming far more sophisticated and personalized. Security professionals are seeing AI used to script customized phishing dialogues and emails tailored to specific individuals, making them much harder to identify as fraudulent.

Zach Lewis, Chief Information Officer at the University of Health Sciences and Pharmacy in St. Louis, explained the evolving threat landscape. He noted that once tools like Mythos become more widely available, organizations should expect a significant increase in cyberattacks until they can patch vulnerabilities almost in real time.

"It's been used to really script those dialogues, those conversations, those phishing emails, to specific people and really customize them to make them a lot more difficult to detect and identify if these are fake or not," Lewis said.

Steps Organizations Can Take to Prepare for AI-Powered Threats

  • Accelerate Vulnerability Patching: Organizations should prioritize identifying and patching known vulnerabilities in their systems before AI tools like Mythos become more widely available, reducing the window of exposure.
  • Enhance Email Security Training: With AI-generated phishing emails becoming more convincing, employees need regular training to recognize sophisticated, personalized attacks that may bypass traditional filters.
  • Implement Real-Time Monitoring: Security teams should deploy continuous monitoring systems that can detect unusual activity patterns and respond to threats at machine speed rather than human speed.
  • Collaborate With Industry Partners: Following Anthropic's Project Glasswing model, organizations should share threat intelligence and vulnerability information with peers to collectively strengthen defenses.

Is Anthropic's Approach Responsible or Strategic?

Security experts and analysts have raised questions about Anthropic's motivations for restricting Mythos access. Some speculate that the limited release could be a marketing strategy aimed at generating interest from prospective customers. Both Anthropic and rival OpenAI are expected to launch initial public offerings by the end of the year, according to The Wall Street Journal, creating potential incentives to generate headlines and demonstrate responsible AI stewardship.

Peter Garraghan, founder and Chief Science Officer at Mindgard, an AI security platform, suggested that Anthropic may be using this approach as a marketing tool ahead of its anticipated IPO. However, Anthropic has consistently positioned itself as a company prioritizing AI safety and responsible development, distinguishing itself from competitors through public emphasis on guardrails and ethical considerations.

"When facing the tough decisions, Anthropic has actually been true to its values. Curating the release of Mythos does allow them to look to be the protectors of this responsible AI, but it also is a great marketing and advertising tool," explained Malek Ben Sliman, a marketing lecturer at Columbia Business School.

What Do Security Experts Say About the Bigger Picture?

The emergence of Mythos highlights a fundamental challenge in cybersecurity: the asymmetry between defense and offense. Alissa Valentina Knight, CEO of cybersecurity AI company Assail, emphasized that the situation represents a critical wake-up call for the industry. She warned that security teams have already struggled to keep pace with human hackers, and that the introduction of AI-powered attacks makes the challenge exponentially more difficult.

"What we need to do is look at this as a wake-up call to say, the storm isn't coming, the storm is here. We need to prepare ourselves, because we couldn't keep up with the bad guys when it was humans hacking into our networks. We certainly can't keep up now if they're using AI because it's so much devastatingly faster and more capable," Knight told CBS News.

The fundamental issue is that humans remain the weakest link in security. Humans make mistakes when writing code, and vulnerabilities can exist in source code for years without being discovered by human reviewers. AI's ability to systematically scan and identify these weaknesses at scale represents both an opportunity for defense and a risk if the technology is weaponized.

Anthropic acknowledged the stakes in its official statement, noting that the fallout from misusing tools like Mythos could be severe for economies, public safety, and national security. The company's decision to restrict access reflects recognition that the technology's power demands careful stewardship, even as the broader cybersecurity landscape already faces threats from AI-enabled attackers who have no such constraints.