Nvidia's $20 Billion Groq Deal: Is It a License or a Clever Acquisition in Disguise?

Nvidia structured a $20 billion deal with AI startup Groq as a licensing agreement rather than an acquisition, but the arrangement has triggered a Senate investigation: Nvidia hired Groq's founder and CEO Jonathan Ross, president Sunny Madra, and most of the engineering team while technically leaving Groq as an independent company. The deal raises questions about whether this is a genuine partnership or a cleverly disguised acquisition designed to avoid antitrust review.

What Exactly Did Nvidia Buy in This Deal?

On December 24, 2025, Nvidia announced the $20 billion licensing agreement with Groq, a company that had become one of the few credible competitors in AI inference hardware. The deal grants Nvidia non-exclusive rights to Groq's Language Processing Unit (LPU) technology, the specialized chip architecture that made Groq famous for delivering inference speeds roughly five to seven times faster than traditional graphics processing units (GPUs).

Here is where the structure gets complicated. Groq technically remains an independent company under new CEO Simon Edwards, retains full ownership of its intellectual property, and continues operating GroqCloud, its inference-as-a-service platform. However, Nvidia hired away Groq's founder and CEO Jonathan Ross, president Sunny Madra, and the majority of Groq's engineering talent. This hybrid arrangement has drawn criticism from lawmakers who argue it resembles an "acqui-hire," a strategy where a company acquires another primarily to hire its talent while keeping the target nominally independent.

Why Does Groq's Technology Matter So Much That Nvidia Paid $20 Billion?

To understand the deal's significance, you need to grasp what makes Groq's LPU fundamentally different from Nvidia's own chips. Traditional GPUs, including Nvidia's H100 and Blackwell processors, store data in High Bandwidth Memory (HBM) located off the chip itself. Every time the processor needs information, it must reach out to external memory, creating a bottleneck that slows down inference, the process of running an AI model to generate responses.

Groq's LPU flips this architecture entirely by using on-chip SRAM (Static Random-Access Memory) as its primary working storage. This design delivers roughly 150 terabytes per second of memory bandwidth per chip, compared to about 22 terabytes per second for a traditional GPU, nearly seven times faster. The practical result is that Groq's chips can generate 500 to 750 tokens per second (the small chunks of text that make up AI responses), versus roughly 100 tokens per second on comparable GPU setups. Energy consumption also drops dramatically, from 10 to 30 joules per token on GPUs to just 1 to 3 joules per token on LPUs.
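The ratios above follow directly from the article's figures. A quick back-of-envelope check (the numbers are the article's; the midpoint treatment of the energy ranges is an assumption for illustration):

```python
# Back-of-envelope check of the quoted bandwidth and energy figures.
gpu_bw_tb_s = 22    # off-chip HBM bandwidth per traditional GPU
lpu_bw_tb_s = 150   # on-chip SRAM bandwidth per LPU

# 150 / 22 is about 6.8, i.e. "nearly seven times faster"
print(f"Bandwidth ratio: {lpu_bw_tb_s / gpu_bw_tb_s:.1f}x")

# Energy per token, taking the midpoint of each quoted range
gpu_j_per_token = (10 + 30) / 2   # GPUs: 10-30 joules per token
lpu_j_per_token = (1 + 3) / 2     # LPUs: 1-3 joules per token
print(f"Energy reduction: {gpu_j_per_token / lpu_j_per_token:.0f}x")
```

At the midpoints, the energy savings work out to roughly an order of magnitude, consistent with the ranges quoted above.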

The tradeoff is memory capacity. Each LPU holds only 500 megabytes of SRAM compared to multiple gigabytes on a GPU. This means LPUs excel at the decode phase, where the model generates responses token by token, but struggle with the computationally intensive prefill and attention phases that happen earlier in the inference process. Rather than viewing this as a weakness, Nvidia recognized the complementarity and decided to integrate both architectures into a single system.
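To see why 500 megabytes is so constraining, consider how many chips it takes just to hold a model's weights. This is a rough illustration: the 7-billion-parameter model size and 8-bit weight format are assumptions for the example, not figures from the deal:

```python
import math

# How many LPUs are needed just to store a model's weights in SRAM?
sram_per_lpu_mb = 500          # per-chip SRAM capacity, per the article
model_params = 7e9             # hypothetical 7B-parameter model
bytes_per_param = 1            # assume 8-bit (FP8/INT8) quantized weights

weights_mb = model_params * bytes_per_param / 1e6
chips_needed = math.ceil(weights_mb / sram_per_lpu_mb)
print(f"{weights_mb:.0f} MB of weights -> at least {chips_needed} LPUs")
```

Even a modest model under aggressive quantization spans more than a dozen chips before any working state is stored, which is why LPU deployments shard models across large racks while GPUs can fit the same weights in a few devices' HBM.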

How Will Nvidia Actually Use Groq's Technology?

At Nvidia's GTC 2026 conference in March, CEO Jensen Huang unveiled the Groq 3 LPU, the first chip to emerge from the partnership. Manufactured by Samsung on a 4-nanometer process, the Groq 3 slots into Nvidia's Vera Rubin GPU platform as a dedicated decode-phase co-processor. The system uses what Nvidia calls Attention-FFN Disaggregation (AFD), a technical approach where Vera Rubin GPUs handle the prefill phase and full-context attention operations, while Groq 3 LPUs take over for the latency-sensitive feed-forward networks and operations where SRAM bandwidth dominance matters most.
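The division of labor described above can be sketched as a two-stage pipeline: a compute-heavy prefill pass over the whole prompt on one device class, then a bandwidth-bound, token-by-token decode loop on the other. This is a conceptual sketch only; the function names and data structures are illustrative stand-ins, not Nvidia's or Groq's actual APIs:

```python
# Conceptual sketch of disaggregated inference: prefill runs once on the
# GPU-class device, then decode generates tokens one at a time on the
# LPU-class device, reading from the cached attention state.

def prefill_on_gpu(prompt_tokens):
    """Compute-heavy phase: process the full prompt at once, build the cache."""
    return {"kv_cache": list(prompt_tokens)}  # stand-in for real attention state

def decode_on_lpu(state, max_new_tokens):
    """Bandwidth-bound phase: emit one token per step from the cached state."""
    generated = []
    for i in range(max_new_tokens):
        token = f"tok{i}"                # stand-in for a sampled model token
        state["kv_cache"].append(token)  # each new token extends the cache
        generated.append(token)
    return generated

state = prefill_on_gpu(["Hello", "world"])
print(decode_on_lpu(state, 3))
```

The key point the sketch captures is that decode touches the full cached state on every single token, so per-step memory bandwidth, where SRAM dominates, sets the generation speed.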

The performance claims are striking. Nvidia stated that the LPX rack paired with a Vera Rubin NVL72 delivers 35 times higher inference throughput per megawatt than Blackwell NVL72 alone for trillion-parameter models, at a target price of $45 per million tokens. Samsung has already begun mass production on its 4-nanometer process, with shipments expected in the third quarter of 2026, just nine months after the deal announcement.

Steps to Understanding the Regulatory Concerns

  • Market Dominance Context: Nvidia controls roughly 90 percent of the GPU market, making it the overwhelming leader in AI hardware. Groq was one of the few credible competitors with genuinely faster and more energy-efficient technology for certain workloads, which is why regulators are concerned about consolidation.
  • The Licensing Loophole: Licensing deals are exempt from Hart-Scott-Rodino (HSR) premerger notification requirements, while traditional acquisitions are not. By structuring the deal as a license rather than an acquisition, Nvidia avoided the formal antitrust review process that would normally apply to a $20 billion transaction in a concentrated market.
  • The Talent Acquisition Problem: By hiring Groq's founder, CEO, and core engineering team, Nvidia may have effectively hollowed out the company's ability to compete independently. If Groq cannot function as a meaningful competitor without its leadership and engineers, the "independent company" argument becomes questionable in the eyes of regulators.
  • Practical Exclusivity Questions: Although the license is technically non-exclusive, meaning Groq could theoretically license its LPU technology to other companies, the deal's structure raises questions about whether this exclusivity exists in practice. If no other company actually licenses the technology, the arrangement functions as exclusive despite its legal language.

What Are Lawmakers Saying About This Deal?

On March 20, 2026, Senators Elizabeth Warren of Massachusetts and Richard Blumenthal of Connecticut sent a letter to Jensen Huang raising pointed questions about whether the Groq deal was deliberately structured to evade antitrust review. Their core argument centers on Nvidia's market dominance and Groq's status as one of the few credible competitors in AI inference hardware.

The senators set an April 3, 2026, deadline for Nvidia to respond to their questions and urged the Department of Justice and Federal Trade Commission to open formal investigations. This scrutiny sits alongside broader regulatory attention to Big Tech "acqui-hire" arrangements, where companies acquire startups primarily to hire talent while maintaining nominal independence. The FTC is already investigating similar structures in other tech deals, signaling that this arrangement has become a focus of antitrust enforcement.

The investigation highlights a fundamental tension in tech regulation. Nvidia argues that the deal is genuinely complementary, combining Groq's decode-phase expertise with Nvidia's prefill and training capabilities to create a better system. Critics counter that regardless of technical merit, the arrangement allows Nvidia to neutralize a competitor and consolidate its dominance without triggering formal antitrust review. The outcome of this investigation could reshape how tech companies structure partnerships and acquisitions in the AI era.