Why AI's Biggest Companies Are Killing Products to Stay Afloat
The world's largest AI companies are hitting a critical inflection point: they're burning through computing resources faster than they can generate revenue, forcing difficult choices about which products survive and which get sacrificed. OpenAI recently killed its Sora video-generation app, abandoning a $1 billion Disney licensing deal in the process, while Anthropic restricted access to the OpenClaw agent framework through standard subscription plans, forcing users onto pay-as-you-go pricing that costs substantially more. These aren't isolated incidents; they signal a broader crisis in the AI industry as companies race toward profitability before their anticipated initial public offerings (IPOs).
What's Driving the AI Monetization Crisis?
The problem stems from a fundamental mismatch between how AI companies expected their products to be used and how customers actually use them. AI agents, autonomous systems that perform complex tasks with minimal human intervention, consume far more computing power than traditional large language models (LLMs). When customers deploy these agents through standard subscription plans, they burn through computing tokens at rates the companies never anticipated, making those products economically unsustainable.
The financial pressure is immense. Both Anthropic and OpenAI have made projections to investors showing hundreds of billions in revenue and profitability by the end of the decade, according to documents leaked to the Wall Street Journal. But reaching those targets requires these companies to make hard decisions now about which products to support, which to kill, and how aggressively to monetize their user base.
How Are AI Companies Responding to the Compute Crisis?
- Product Elimination: OpenAI discontinued Sora, its text-to-video generation tool, because the computational costs of running the service exceeded the revenue it generated, freeing up computing resources for more profitable products like Codex.
- Pricing Restructuring: Anthropic moved users away from standard subscription access to the OpenClaw agent framework, forcing them onto pay-as-you-go pricing models that cost substantially more per use.
- Resource Reallocation: Companies are shifting computing power away from experimental or lower-revenue products toward high-demand services that generate better margins.
These moves reveal the existential pressure facing the AI industry. The companies that built their reputations on openness and accessibility are now making decisions driven by pure economics. The question isn't whether AI is safe or beneficial; it's whether the business model can survive long enough to reach profitability.
Why Should You Care About the AI Monetization Cliff?
The decisions AI companies make now will shape what products and services exist in the future. If profitability requires restricting access, eliminating features, or dramatically raising prices, the democratization narrative that has defined the AI boom may be ending. Users who relied on affordable access to cutting-edge AI tools may find themselves priced out or locked into expensive enterprise plans.
Additionally, the pressure to reach profitability could influence how these companies approach safety and governance. When survival depends on maximizing revenue and minimizing costs, the incentives to invest in safety research, transparency, or regulatory compliance may weaken. This creates a tension between the stated commitment to responsible AI development and the financial realities of running billion-dollar companies.
The broader AI industry has long expressed concerns about safety in advanced systems. Over 1,000 AI researchers and executives signed an open letter in March 2023 calling for a pause on developing powerful AI systems until new safety protocols could be established. That letter warned of risks including AI-driven misinformation, algorithmic bias, and the potential for advanced AI to be used maliciously. Yet the current moment suggests that market forces, not safety concerns, are driving the most consequential decisions about AI development and deployment.
The coming months will be critical. As Anthropic and OpenAI barrel toward their anticipated IPOs, the pressure to demonstrate profitability will only intensify. The choices they make about which products to support, how to price their services, and where to allocate computing resources will reveal whether the AI industry can balance commercial success with the accessibility commitments that have defined its public messaging.