The AI Bubble's Hidden Problem: When Grand Promises Meet Circular Investments

OpenAI's financial trajectory raises serious questions about whether the company's ambitious promises can justify its massive infrastructure spending. While OpenAI has reached roughly $20 billion in annual recurring revenue as of late 2025 and early 2026, the company has signaled plans to spend $1.4 trillion on infrastructure over the next eight years, according to statements by CEO Sam Altman and CFO Sarah Friar. Current annual revenue therefore covers less than 1.5 percent of the promised expenditure, a fundamental mismatch between earnings and investment that concerns value investors.
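The mismatch above is easy to verify with back-of-envelope arithmetic. A minimal sketch, using only the figures cited in this article; spreading the $1.4 trillion evenly over eight years is a simplifying assumption, not a disclosed spending schedule:

```python
# Back-of-envelope check of the revenue-to-spending ratio cited above.
annual_revenue = 20e9        # ~$20 billion annual recurring revenue
total_commitment = 1.4e12    # stated $1.4 trillion infrastructure plan
years = 8                    # stated horizon

ratio = annual_revenue / total_commitment
avg_annual_spend = total_commitment / years  # naive even split (assumption)

print(f"Revenue as share of total commitment: {ratio:.2%}")        # ~1.43%
print(f"Implied average annual spend: ${avg_annual_spend/1e9:.0f}B")  # $175B
print(f"Revenue vs. one year of spend: {annual_revenue/avg_annual_spend:.1%}")
```

Even under this generous even-split assumption, current revenue covers only about a ninth of a single year's implied outlay, which is the gap the rest of this section explores.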

What Are the Real Financial Risks Behind OpenAI's Growth Story?

The concern extends beyond raw numbers. A circular funding pattern has emerged in which major technology companies like Microsoft and Nvidia invest billions into OpenAI, and OpenAI then spends those same billions on Microsoft's cloud services and Nvidia's chips. This creates what some analysts describe as a "phantasmal picture of revenue growth." If Nvidia is essentially funding its own future sales through venture investments in OpenAI, the quality of those earnings becomes questionable. The critical question for investors is whether the revenue growth would vanish if the investment cycle stopped.

Beyond the immediate financial mechanics, Altman's track record raises additional concerns. His first company, Loopt, was a location-sharing app for which Altman claimed massive user numbers. However, when the business was sold to Green Dot and subsequently shut down, it emerged that Loopt had only 500 daily active users, despite Altman's public insistence that it had "way more users than any other similar service." This pattern of overstated claims versus actual results has become relevant again as investors evaluate OpenAI's promises.

How Can Investors Evaluate Potential Conflicts of Interest in AI Leadership?

Beyond OpenAI itself, Altman's personal investment portfolio reveals a complex web of interests that may create conflicts. Consider the following areas where Altman has positioned capital:

  • Energy Infrastructure: Altman is a major investor in nuclear energy companies including Helion and Oklo, which would directly benefit from increased electricity demand driven by AI data centers.
  • Data Sources: Altman owns a material share of Reddit and sat on its board until 2022; OpenAI has a licensing agreement to use Reddit's content to train its language models, creating a direct financial incentive.
  • Supply Chain Control: Altman is invested in companies providing AI networking equipment, thermal battery technology through Exowatt, and rare earth metal mining via KoBold Metals, all essential for sustaining massive server farms.
  • Risk Mitigation: Altman has positioned capital in companies offering protection against AI risks, including identity verification through Worldcoin to prevent deepfakes, and insurance for losses from AI scams and hacking.

The pattern suggests what some analysts describe as a closed loop of influence. Altman is not merely building AI tools; he has strategically positioned his capital across the entire value chain necessary for OpenAI's continued existence. When inevitable problems arise from the AI transition, such as energy crises, logistical bottlenecks, or social friction, Altman would be positioned to profit from the solutions to complications he helped create.

One particularly notable example involves Reddit. In 2015, Altman made a public pledge to the Reddit community, promising that he and his fellow investors would return 10 percent of the platform's value to users. This commitment remains unfulfilled and is, according to some observers, "conveniently" obscured by complex regulatory hurdles. Meanwhile, OpenAI's agreement to use Reddit content as training data directly benefits Altman's stake in the platform.

What Happens If AI's Breakthrough Doesn't Arrive as Promised?

The most significant risk lies in the fundamental assumption underlying all this investment. Altman has made sweeping claims about AI's capabilities, stating that artificial intelligence will solve housing crises, cancer, poverty, climate change, mental health challenges, democracy problems, and numerous other human challenges. He has also claimed that AI will bring the marginal cost of energy "rapidly towards zero" and create "universal extreme wealth."

If these breakthroughs do not materialize as promised, or if the energy cost reductions fail to arrive, the trillions of dollars currently being poured into data centers and specialized chips would represent the most expensive white elephants in history. The infrastructure investments would generate returns far below expectations, and Altman's strategically positioned portfolio would lose much of its value proposition.

There is also a governance concern about how AI companies are securing support. By warning of AI dangers, including superintelligence and autonomous machines, AI CEOs have secured government backing. By tying AI infrastructure rollout to the power grid and seeking government "backstops," as OpenAI's CFO has suggested, AI companies shift risk onto the public while privatizing profits. This arrangement raises questions about whether the financial model depends on continued government support rather than genuine market demand.

For disciplined investors, the widening gap between grand public declarations and the underlying financial mechanics represents a significant red flag. The question is no longer whether AI will transform society, but whether the current investment structure can sustain itself if transformation takes longer than promised, or if the promised benefits prove more limited than anticipated.