Sam Altman Accuses Anthropic of Using Fear to Monopolize AI Access
OpenAI CEO Sam Altman has publicly criticized Anthropic's strategy of restricting access to its latest Claude model, arguing the company is using fear-based messaging to justify keeping powerful AI technology in the hands of a select few. Altman's comments mark an escalation in the ongoing competitive tension between OpenAI and Anthropic, the AI safety company founded by former OpenAI researchers including Dario Amodei.
What Is Anthropic's Restricted Release Strategy?
Anthropic has chosen to limit access to Claude Mythos, its most advanced model, to select organizations such as Google and Microsoft rather than releasing it publicly. The company justifies this approach by citing the model's advanced capabilities in identifying cybersecurity vulnerabilities, arguing that unrestricted access could pose security risks. This selective disclosure strategy has sparked debate within the tech industry about whether such restrictions genuinely serve safety purposes or mask other business motivations.
In a candid conversation with podcaster Ashlee Vance, Altman compared Anthropic's approach to a troubling sales tactic. He stated that the company is essentially saying, "We've created something powerful and potentially dangerous, and now we're offering you the protection for a hefty price, but only if we deem you worthy." This characterization frames Anthropic's safety rationale as a marketing mechanism rather than a legitimate protective measure.
How Are the Two AI Companies Approaching AI Access Differently?
The philosophical divide between OpenAI and Anthropic reflects fundamentally different views on how AI technology should be distributed:
- OpenAI's Approach: Altman emphasizes OpenAI's commitment to broader dissemination of AI technology, arguing the company aims to "empower everyone with technology while emphasizing shared responsibility". While acknowledging that certain "very dangerous models" might need controlled releases, OpenAI positions itself as pursuing an inclusive journey with AI advancement.
- Anthropic's Approach: The company maintains that selective disclosure of advanced models like Claude Mythos is necessary to prevent misuse of powerful cybersecurity tools. By limiting access to vetted organizations, Anthropic argues it can better monitor how the technology is deployed and prevent harmful applications.
- Industry Concentration Concerns: Altman suggests that fear-based strategies may be used to justify keeping AI "within a small circle," pointing to a broader tech industry tendency to concentrate AI power among established players.
Altman's critique extends beyond corporate rivalry to fundamental questions about power and access. He argues that some stakeholders "have always wanted to keep AI within a small circle, and fear-based strategies might just be the ticket to justify it." This observation touches on a deeper concern: whether safety arguments are being used as a cover for maintaining competitive advantage.
The tension between these two approaches reflects a broader challenge in AI governance. The line between ensuring safety and monopolizing advancements remains blurred, with legitimate concerns on both sides. Anthropic's caution about powerful cybersecurity tools could prevent misuse, while OpenAI's push for broader access could accelerate beneficial innovation and democratize AI capabilities.
Dario Amodei, Anthropic's CEO and a former OpenAI researcher, has not publicly responded to Altman's latest criticism, though the two executives have clashed before over AI safety and deployment strategies. The debate underscores how the AI industry's two most prominent safety-focused companies have diverged in their practical implementation of safety principles.
As the AI landscape continues to evolve, the question of who gets access to powerful models and under what conditions will likely remain contentious. Whether Anthropic's selective approach represents prudent risk management or strategic gatekeeping may ultimately depend on how Claude Mythos performs in the hands of its approved users and whether any security incidents emerge from broader access to similar capabilities elsewhere in the industry.