Why DeepSeek Is Building Data Centers in Frozen Inner Mongolia While OpenAI Raises Prices

DeepSeek is abandoning its asset-light playbook and investing billions in self-built data centers in Inner Mongolia's sub-zero climate. The dramatic shift signals how geopolitical chip embargoes are forcing Chinese AI companies to control their entire infrastructure stack, from power grids to server rooms. The company's new V4 model, released in April 2026, offers free access to features that OpenAI's GPT-5.5 charges premium rates for, but this pricing strategy only works if DeepSeek owns the physical infrastructure powering it.

What Changed Between DeepSeek's Old and New Business Model?

For years, DeepSeek built its reputation on doing more with less. The company famously trained its R1 model for under $6 million, a fraction of what competitors spend, and maintained an asset-light approach focused purely on algorithm development. That era has ended. In early 2026, DeepSeek posted job listings for Senior Data Center Delivery Managers and Senior Operations Engineers in Ulanqab, Inner Mongolia, with salaries up to 30,000 yuan monthly, signaling a complete operational transformation.

The trigger was geopolitical. In 2025, the U.S. Department of Commerce tightened export controls on artificial intelligence chips to China, cutting off supplies of Nvidia's H100 and H800 processors and adding even the downgraded H20 to the restricted list. This forced DeepSeek to pivot entirely to Huawei's Ascend chip ecosystem, which required building domestic infrastructure from scratch.

The V4 model's trillion-parameter scale, with pre-training data reaching 33 trillion tokens, demands massive computational resources. Supporting this requires tens of thousands of Ascend chips, data centers capable of housing them, power grids to supply electricity, and maintenance teams working in temperatures as low as negative 20 degrees Celsius. This is no longer a software problem; it's a physical infrastructure challenge.
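To get a feel for the scale involved, the figures above can be plugged into the common "6 × parameters × tokens" rule of thumb for training compute. The per-chip throughput, utilization rate, and chip count below are illustrative assumptions (the article says only "tens of thousands" of Ascend chips), not published specifications:

```python
# Rough training-compute estimate using the common 6*N*D FLOPs rule of thumb.
# Hardware figures are illustrative assumptions, not published Ascend specs.
params = 1e12          # ~1 trillion parameters (per the article)
tokens = 33e12         # 33 trillion pre-training tokens (per the article)
total_flops = 6 * params * tokens   # roughly 2e26 FLOPs

chip_flops = 300e12    # assumed sustained per-accelerator throughput, FLOP/s
utilization = 0.4      # assumed model FLOPs utilization (MFU)
n_chips = 20_000       # "tens of thousands of chips" (per the article)

seconds = total_flops / (chip_flops * utilization * n_chips)
days = seconds / 86_400
print(f"total compute: {total_flops:.1e} FLOPs, ~{days:.0f} days on {n_chips:,} chips")
```

Even under these generous assumptions, the run takes on the order of years on a 20,000-chip cluster, which is why the chip count, the power supply, and the cooling budget behind it become the binding constraints rather than the software.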

How Does DeepSeek's Pricing Strategy Compare to OpenAI's?

The pricing divergence between the two companies reveals their fundamentally different business philosophies. OpenAI's GPT-5.5 charges $5 per million input tokens and $30 per million output tokens for its standard version, with the Pro version costing $30 per million input tokens and $180 per million output tokens. These are premium prices for premium capabilities.

DeepSeek's V4-Flash, by contrast, costs only 0.2 RMB (roughly $0.03) per million tokens for cached inputs and 2 RMB for outputs. Even V4-Pro, which rivals top-tier closed-source models, costs just 1 RMB for cached inputs and 24 RMB for outputs. The price difference is not marginal; it's transformational.
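The gap is easiest to see on a concrete workload. The sketch below applies the per-million-token prices quoted above to a hypothetical job of 10 million cached input tokens and 2 million output tokens; the exchange rate is an assumption (about 7.2 RMB per USD), and the workload itself is invented for illustration:

```python
# Cost comparison for an illustrative workload, using the per-million-token
# prices quoted in the article. The FX rate is an assumption (~7.2 RMB/USD).
RMB_PER_USD = 7.2

def cost_usd(in_millions, out_millions, in_price, out_price, currency="usd"):
    """Total cost in USD; prices are quoted per million tokens."""
    total = in_millions * in_price + out_millions * out_price
    return total / RMB_PER_USD if currency == "rmb" else total

# Workload: 10M cached input tokens, 2M output tokens
gpt55_std = cost_usd(10, 2, 5, 30)                    # $5 in / $30 out
v4_flash  = cost_usd(10, 2, 0.2, 2, currency="rmb")   # 0.2 RMB in / 2 RMB out
v4_pro    = cost_usd(10, 2, 1, 24, currency="rmb")    # 1 RMB in / 24 RMB out

print(f"GPT-5.5 standard: ${gpt55_std:.2f}")   # $110.00
print(f"V4-Flash:         ${v4_flash:.2f}")    # under a dollar
print(f"V4-Pro:           ${v4_pro:.2f}")      # single-digit dollars
```

On this workload, V4-Flash comes out more than a hundred times cheaper than GPT-5.5's standard tier, which is the sense in which the difference is transformational rather than marginal.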

To understand what this means in practice, consider a real-world example: you can upload three years of company contracts, emails, and financial statements to V4 and ask it to find a hidden breach-of-contract clause on page 47 of an appendix. In the past, this task required hiring a legal team. Now, it's free.

Why Is Inner Mongolia the Decisive Battleground for AI Infrastructure?

Inner Mongolia, specifically the city of Ulanqab, has become the physical foundation of China's AI resistance to computing power blockades. The region offers two critical advantages: abundant green electricity from renewable sources and access to China's ultra-high-voltage power grid. These resources allow DeepSeek to operate data centers at scale while keeping energy costs low enough to sustain aggressive pricing.

The contrast is stark. On one side are AI engineers in Silicon Valley writing code in plaid shirts while sipping hand-drip coffee. On the other are operations and maintenance personnel wrapped in military overcoats guarding data centers deep in the Inner Mongolian grasslands. This physical divide represents the backbone of China's AI strategy in an era of chip embargoes.

What Are the Pillars of DeepSeek's Infrastructure-First Strategy?

  • Chip Embargo Impact: U.S. restrictions on Nvidia exports forced DeepSeek to abandon its asset-light model and build infrastructure around Huawei's Ascend chips, which require purpose-built data centers and power systems.
  • Geographic Advantage: Inner Mongolia's combination of green electricity and ultra-high-voltage grid access enables DeepSeek to operate massive data centers at lower costs than competitors in other regions.
  • Pricing as Infrastructure Control: By offering near-free access to advanced AI capabilities, DeepSeek trades short-term revenue for ecosystem lock-in and greater control over the infrastructure that powers its models.
  • Operational Complexity: Managing tens of thousands of chips in sub-zero temperatures requires specialized teams, supply chain coordination, and continuous maintenance, transforming DeepSeek from a software company into a heavy-asset operator.

What Does This Mean for the Future of AI Competition?

The shift from capability-focused competition to infrastructure-focused competition marks a fundamental change in how AI companies compete. OpenAI's strategy is to sell premium access to powerful models through subscriptions. DeepSeek's strategy is to control the entire stack, from chips to power grids, and use free or near-free pricing to build an ecosystem that depends on its infrastructure.

This transformation comes at enormous cost. Building data centers, purchasing chips, and laying network cables are open-ended capital sinks. More critically, the heavy-asset model means operating costs will climb steeply while DeepSeek's commercial revenue remains extremely limited. The company is essentially trading losses for ecosystem control and greater leverage over infrastructure.

In April 2026, DeepSeek announced its first external funding round, targeting a valuation of 300 billion RMB (approximately $44 billion USD) and planning to raise 50 billion RMB, including 30 billion RMB from external sources. Rumors swirled that Tencent and Alibaba were competing to invest. However, the funding was driven not just by infrastructure costs but also by talent retention challenges.

During the critical development phase of V4, major Chinese tech companies targeted DeepSeek with aggressive recruitment campaigns. At least five core research and development members departed: Wang Bingxuan, the core author of the first-generation model, joined Tencent; Luo Fuli, a core contributor to V3, was recruited by Lei Jun to Xiaomi with a multi-million-yuan annual salary; and Guo Daya, the core author of R1, joined ByteDance's Seed team.

This talent exodus reveals a deeper challenge: when competitors have unlimited resources and you insist on operating with your own capital, the talent market becomes your most vulnerable weakness. You can ask brilliant engineers to accept lower pay and work overtime for the ideal of democratizing AI, but when a large company offers millions in cash and stock options plus unlimited computing resources, the pricing power of idealism weakens significantly.

The broader implication is that AI competition is no longer purely about algorithmic innovation or model capability. It's about who can build, operate, and control the physical infrastructure that powers next-generation models. DeepSeek's transformation from a pure algorithm company to a heavy-asset operator signals that in an era of geopolitical restrictions on chips, controlling infrastructure has become as important as controlling code.