OpenAI's Safety Retreat: How the Company Abandoned Its Core Mission
OpenAI was founded on a radical premise: that artificial general intelligence (AGI) posed an existential risk to humanity and required a nonprofit structure to prioritize safety over profit. According to a sweeping New Yorker investigation based on hundreds of pages of internal documents, that founding mission has largely evaporated. The company has closed most of its safety teams, become a for-profit entity, and replaced board members who questioned leadership decisions with allies of CEO Sam Altman, raising urgent questions about oversight of a technology many experts consider humanity's greatest long-term risk.
What Happened to OpenAI's Safety Teams?
The transformation became visible after Altman's brief ousting and reinstatement in late 2023, an event employees called "the Blip." Before this turbulent period, OpenAI approached AGI cautiously and maintained dedicated safety infrastructure. After Altman returned, the company's culture shifted dramatically, with AGI becoming a corporate "North Star" rather than a cautionary concern. The most striking casualty: OpenAI disbanded key safety teams, including the existential AI risk team and the superalignment team, which was co-led by former chief scientist Ilya Sutskever.
The superalignment team's fate is particularly telling. In mid-2023, OpenAI publicly pledged a fifth of its computing power to this team, tasked with preventing AI from causing "the disempowerment of humanity or even human extinction." In reality, the team received only 1 to 2 percent of computing resources, allocated on the oldest hardware, before being dissolved entirely. When a reporter asked an OpenAI representative about researchers working on existential safety, the response was telling: "That's not, like, a thing."
How Did OpenAI's Safety Safeguards Collapse?
The company's structural protections against commercial pressure have systematically eroded. OpenAI's original charter included a "merge and assist" clause, championed by then-safety lead Dario Amodei, which would have required the company to stop competing with other AI firms if they developed AGI more safely first, instead donating its resources to that rival. This clause reflected OpenAI's nonprofit ethos. But when Microsoft invested $1 billion in 2019, the deal gave Microsoft veto power over any such merger, effectively neutering the safety provision.
Other safeguards have similarly weakened:
- Board Composition: The board empowered to fire the CEO has been filled with Altman's allies, including economist Larry Summers and former Facebook CTO Bret Taylor, who now serves as chairman. The board members who orchestrated Altman's attempted ouster have been replaced.
- Charter Authority: Insiders say the charter no longer guides OpenAI's behavior, despite being the foundational document that committed the company to prioritizing humanity over profit.
- Accountability Mechanisms: An independent investigation into the allegations that led to Altman's attempted ousting did not produce a written report, leaving no public record of its findings.
What Are the Allegations Against Sam Altman?
The New Yorker investigation, based on secret memos compiled by Sutskever and over 200 pages of notes from Amodei, alleges a troubling pattern of behavior from Altman. According to the reporting, the board initially fired Altman because members did not find him trustworthy enough to "have his finger on the button" of artificial superintelligence, a theoretical AI system that could outperform human intelligence across all domains. Sutskever's memos began with a list headed "Sam exhibits a consistent pattern of..." with the first item: "Lying."
The allegations span multiple domains. Altman allegedly told U.S. intelligence officials that China had launched a major AGI development project and requested government funding for a counteroffensive, but then failed to provide evidence when asked. He reportedly misrepresented safety approvals for GPT-4, claiming the model had been approved by a safety panel when documentation later showed this was inaccurate. When Altman told former CTO Mira Murati that the company's general counsel had downplayed the need for safety approvals, the general counsel stated he was "confused where sam got that impression."
Microsoft senior executives, with whom OpenAI has partnered since 2019, described Altman as someone who "misrepresented, distorted, renegotiated, reneged on agreements." One executive reportedly said there is "a small but real chance he's eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer." Altman's pattern of behavior predates OpenAI; employees at his previous startup Loopt and at Y Combinator, where he served as president for five years, reportedly sought his removal due to concerns about transparency and trustworthiness.
"The problem with OpenAI is Sam himself," wrote Amodei, now CEO of Anthropic, in his private notes.
What Are the Implications for AI Safety?
The collapse of OpenAI's safety infrastructure matters because the company is the leading developer of the technology many experts consider a potential existential threat. Understanding the implications requires examining several key dimensions:
- Scale of Deployment: OpenAI's AI is used by tens of millions of people worldwide for health advice, workplace automation, homework assistance, and companionship. ChatGPT is deployed throughout the federal government and has been sold to the Pentagon, meaning safety failures could affect national security and public health.
- Absence of Oversight: With safety teams disbanded and the board filled with Altman allies, there is no longer an internal institutional check on the company's development trajectory. The company that was supposed to prove you could build powerful AI while maintaining accountability to the public good has dismantled the structures designed to enforce that accountability.
- Financial Pressure vs. Safety: Altman is reportedly pushing for an IPO as soon as the fourth quarter of 2026 and has committed to spending $600 billion over five years, despite expectations that OpenAI will burn more than $200 billion before becoming profitable. CFO Sarah Friar reportedly doubts the company is ready for public markets, raising questions about whether financial pressures are overriding safety considerations.
When asked about safety, Altman told the New Yorker that his "vibes don't match a lot of the traditional AI-safety stuff," and said only vaguely that OpenAI would "run safety projects, or at least safety-adjacent projects." This statement from the CEO of the world's most influential AI company represents a stark departure from the safety-first ethos that defined OpenAI's founding.
The investigation reveals a company caught between its founding mission and commercial imperatives, with the latter winning decisively. As OpenAI prepares for an IPO and continues racing to build more powerful models, the question of who is responsible for ensuring these systems don't pose existential risks remains largely unanswered.