The AI Safety Theater Debate: Why Sam Altman Says Anthropic Is Using Fear to Sell Cybersecurity
Sam Altman has publicly criticized Anthropic's marketing strategy around its new Mythos cybersecurity model, calling it "fear-based marketing" designed to create artificial scarcity and justify premium pricing. The dispute highlights a fundamental tension in the AI industry: how companies communicate risk, justify access restrictions, and compete in a market where perception of capability matters as much as the technology itself.
What Is Anthropic's Mythos Model and Why the Controversy?
Anthropic, founded by former OpenAI researchers, launched Mythos in April 2026 as a frontier model engineered specifically for cybersecurity applications. The company positioned it as so powerful that it could potentially be weaponized by malicious actors, justifying a tightly controlled, enterprise-only rollout through an initiative called Project Glasswing. Anthropic provided access only to a limited batch of software providers to test and safeguard their own systems against potential cyberattacks.
However, this cautious approach immediately drew scrutiny. Bloomberg reported that unauthorized users gained access to Mythos on the day it was announced through various methods, including leveraging the accounts of third-party contractors working with Anthropic. Notably, these unauthorized users appeared to be exploring the model rather than conducting malicious activity, and did not run cybersecurity-related prompts on it.
How Does Altman Characterize Anthropic's Strategy?
During an April 21, 2026 appearance on the "Core Memory" podcast, Altman offered a pointed critique of Anthropic's justification for limiting access. He framed the company's messaging not as prudent safety protocol but as calculated marketing designed to create fear and justify premium enterprise pricing.
"It is clearly incredible marketing to say, 'We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,'" Altman stated, employing a vivid metaphor to critique his competitor's approach.
Sam Altman, CEO of OpenAI
Altman further suggested this strategy aligns with a broader desire among certain factions to restrict advanced AI to a privileged few. He noted that this justification can take many forms, but the underlying goal remains the same: concentrating power in the hands of a smaller group.
What Are the Core Disagreements Between OpenAI and Anthropic?
The Mythos dispute reflects deeper philosophical differences between the two AI labs that have manifested in previous public exchanges. These companies, despite sharing common roots, have increasingly diverged in their approaches to AI deployment and public messaging.
- Risk Disclosure Philosophy: Anthropic emphasizes extreme caution and highlights worst-case dual-use potential, while OpenAI argues this approach amounts to fear-mongering used for commercial gain.
- Access Model Strategy: Anthropic restricts high-stakes models to enterprise clients only, which OpenAI characterizes as elitist and as concentrating power rather than enabling broad innovation.
- Safety Narrative: Anthropic contends that safety requires controlled, limited release environments, whereas OpenAI maintains that safety is best achieved through transparency and widespread testing.
This is not the first time Altman has publicly criticized Anthropic. Recently, when users reported changes to Anthropic's Claude Code pricing, Altman highlighted that OpenAI's Codex is available on both the free and Plus plans, emphasizing his desire for people to use AI broadly. Earlier, without naming Anthropic, Altman said he did not want Codex users to face reduced usage limits, a likely reference to Claude Code users hitting their limits faster than before.
Why Does This Debate Matter Beyond Corporate Rivalry?
The Altman-Anthropic dispute occurs against a backdrop of increasing regulatory scrutiny worldwide. Legislative bodies in the United States, European Union, and elsewhere are actively crafting rules for high-risk AI applications. The rhetoric companies use today directly informs the regulatory paradigms of tomorrow.
Experts in technology ethics have observed a broader pattern in the AI industry's communication strategies. For years, discussions about artificial general intelligence (AGI) and frontier models have been punctuated by dramatic warnings about existential risk, often originating from the very companies building the technology. This creates a paradoxical situation where the sellers of a product are also its most prominent doomsayers.
These hyperbolic risk narratives can achieve several strategic objectives simultaneously. They attract media attention and establish a company as a serious player grappling with profound questions. They can influence regulatory frameworks, potentially creating barriers to entry for smaller competitors. They also justify high valuation premiums based on the world-altering potential of the technology. However, this strategy carries significant reputational risk, as Altman's comments indicate.
How to Evaluate AI Safety Claims in a Competitive Market
- Examine Access Justifications: Ask whether restrictions on AI model access are based on demonstrated risks or on creating artificial scarcity. Consider whether unauthorized access attempts have actually resulted in harmful outcomes or whether they remain theoretical.
- Compare Competing Approaches: Evaluate both restricted-access and open-iteration models on their track records. OpenAI's iterative public deployment of ChatGPT and Anthropic's controlled releases both claim safety benefits; examine which approach has produced measurable safety improvements.
- Assess Regulatory Influence: Consider how companies' risk narratives might shape policy. If policymakers perceive the industry as crying wolf for commercial advantage, they may discount genuine warnings, while accepting the most alarming assessments at face value could stifle beneficial innovation.
The broader context reveals a trilemma facing the entire advanced AI industry: how to balance rapid innovation, responsible safety protocols, and the maintenance of public trust. Anthropic's approach with Mythos prioritizes a specific interpretation of safety through controlled access. OpenAI's criticism advocates for a different path that views broad, supervised exposure as a key component of robust safety testing.
The stakes for policymakers cut both ways: if they perceive the industry as using fear-based messaging primarily for commercial positioning, they may grow skeptical of legitimate safety concerns; if they accept the most alarming risk assessments without scrutiny, they may enact regulations that inadvertently stifle beneficial innovation. The path forward requires nuanced, transparent dialogue about how tech leaders communicate risk, justify access restrictions, and compete in a market where perception of capability is as crucial as capability itself.