NVIDIA is betting the next phase of artificial intelligence won't be about bigger models, but about smarter systems that can reason, act, and adapt in the real world. At GTC 2026, CEO Jensen Huang announced Vera Rubin, a new full-stack computing platform comprising seven chips, five rack-scale systems, and one supercomputer built specifically for agentic AI. The announcement signals a fundamental shift in how the AI industry thinks about computing infrastructure, moving beyond raw processing power toward systems optimized for AI agents that can operate autonomously in physical and digital environments.

## What Is Vera Rubin and Why Does It Matter?

Vera Rubin represents NVIDIA's most ambitious attempt yet to vertically integrate hardware and software for AI workloads. Named after the astronomer who revealed the existence of dark matter, the platform includes the new NVIDIA Vera CPU and BlueField-4 STX storage architecture. Unlike previous generations of AI accelerators that focused on training massive language models, Vera Rubin is purpose-built for inference and reasoning at scale, addressing a critical bottleneck in deploying AI agents that need to make decisions in real time.

The platform's architecture reflects what Huang calls "extreme codesign," a process where software and silicon are designed together from the ground up rather than as separate components. This approach has already made NVIDIA "the inference king," according to analyst descriptions cited by Huang, because it optimizes every layer of the computing stack simultaneously. Vera Rubin extends this philosophy further, treating the entire system as one integrated unit rather than a collection of separate pieces.

## How Does Vera Rubin Support the Shift Toward Agentic AI?

The rise of AI agents represents a fundamental change in how artificial intelligence will be deployed in the real world.
Unlike traditional AI models that respond to user queries, agents can autonomously plan, execute tasks, and adapt based on feedback. This requires different computational priorities than training or simple inference. Vera Rubin addresses these needs through several key architectural innovations:

- Compute Optimization: The platform prioritizes test-time compute, allowing AI agents to spend more processing power on reasoning and decision-making at inference time rather than only during training.
- Memory and Storage Integration: The BlueField-4 STX storage architecture ensures data moves efficiently across the entire system, which is critical for agents that need to access and process information rapidly.
- Networking and Security: Built-in security and networking capabilities allow enterprises to deploy AI agents safely without exposing sensitive data or systems to unauthorized access.
- Scalability: The five rack-scale systems and supercomputer design allow organizations to scale from small deployments to massive data center operations.

Huang emphasized that Vera Rubin should be understood as "the entire system, vertically integrated, complete with software, extended end to end, optimized as one giant system." This holistic approach contrasts sharply with the modular, best-of-breed approach that has dominated enterprise computing for decades.

## What Comes After Vera Rubin?

NVIDIA is already looking beyond Vera Rubin to its next major architecture, called Feynman. This future platform will include the NVIDIA Rosa CPU, named for Rosalind Franklin, whose X-ray crystallography helped reveal the structure of DNA. Rosa is designed to move data, tools, and tokens efficiently across the full stack of agentic AI infrastructure, addressing the fundamental challenge of keeping data flowing smoothly through increasingly complex systems.
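The test-time compute priority described for Vera Rubin — spending more inference cycles on harder decisions rather than relying only on what was learned during training — can be illustrated with a toy best-of-n sampling loop. Everything below is a hypothetical stand-in for illustration (the candidate generator and scorer would be model calls in a real agent), not NVIDIA code:

```python
import random

random.seed(0)

def generate_candidate(task: str) -> str:
    """Stand-in for an agent producing one reasoning trace.
    In a real system this would be a model call."""
    return f"plan-{random.randint(0, 9)} for {task}"

def score(candidate: str) -> float:
    """Stand-in for a verifier or reward model rating a trace."""
    return random.random()

def best_of_n(task: str, n: int) -> str:
    """Spend more inference-time compute (a larger n) to get a
    better answer: sample n candidates, keep the highest-scoring one."""
    candidates = [generate_candidate(task) for _ in range(n)]
    return max(candidates, key=score)

# A harder task can simply be given a larger compute budget.
print(best_of_n("route the delivery fleet", n=8))
```

The key design point is that quality scales with `n` at inference time, which is exactly the kind of workload that rewards hardware optimized for inference throughput rather than training alone.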
The Feynman generation will pair Rosa with the LP40, NVIDIA's next-generation LPU (learning processing unit), along with BlueField-5 and CX10 networking components. These will be connected through NVIDIA Kyber for both copper and co-packaged optics scale-up, and NVIDIA Spectrum-class optical scale-out. Together, these components advance every pillar of what NVIDIA calls the "AI factory": compute, memory, storage, networking, and security.

## How Is NVIDIA Preparing Enterprises to Deploy These Systems?

Recognizing that building and deploying new AI infrastructure is complex and risky, NVIDIA announced the Vera Rubin DSX AI Factory reference design and the NVIDIA Omniverse DSX Blueprint. These tools allow companies to simulate AI factories in software before building them in the physical world, reducing the risk and cost of deploying new infrastructure. DSX Air, part of the broader DSX platform, lets organizations test configurations, workloads, and scaling strategies virtually before committing capital to physical hardware.

This simulation-first approach addresses a real pain point for enterprises. Building a data center costs hundreds of millions of dollars and takes years to complete. Being able to validate architectural decisions in software before breaking ground on physical infrastructure could save organizations from costly mistakes and accelerate time to deployment.

## What Does This Mean for AI's Revenue Potential?

Huang's projection of $1 trillion in revenue from 2025 through 2027 reflects his confidence that AI infrastructure spending will accelerate dramatically. He noted that computing demand for NVIDIA GPUs is "off the charts" and that he believes overall computing demand has increased by 1 million times over the last few years. This staggering growth rate suggests that AI is not a niche market but a fundamental shift in how computing itself is organized.
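To put the "1 million times" figure in perspective, a million-fold increase implies steep compound growth. A quick back-of-the-envelope check (the ten-year window below is an assumption for illustration; Huang said only "the last few years"):

```python
# Back-of-the-envelope check on the "1 million times" growth claim.
# The ten-year window is an assumed illustration, not a stated figure.
years = 10
total_growth = 1_000_000
annual_multiplier = total_growth ** (1 / years)
print(f"~{annual_multiplier:.1f}x per year")  # ~4.0x per year over a decade
```

Even spread across a full decade, that claim implies computing demand roughly quadrupling every year; over a shorter window, the implied annual multiplier is steeper still.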
The $1 trillion projection encompasses not just NVIDIA's revenue, but the broader market for AI infrastructure, software, and services. It reflects investments from cloud providers like AWS, Microsoft Azure, and Google Cloud, as well as enterprise customers building their own AI infrastructure. NVIDIA's role as the primary supplier of the chips and platforms powering this infrastructure positions the company to capture a significant portion of this spending.

## How Does OpenClaw Fit Into NVIDIA's Agentic AI Strategy?

Beyond hardware, NVIDIA is investing heavily in the software ecosystem for AI agents. Huang spotlighted OpenClaw, an open source project from developer Peter Steinberger, calling it "the most popular open source project in the history of humanity." OpenClaw provides the operating system for agentic computers, allowing developers to pull down the software, stand up an AI agent, and begin extending it with tools and context using a single command.

"Every single company in the world today has to have an OpenClaw strategy," stated Jensen Huang, CEO of NVIDIA.

To ensure OpenClaw can be deployed securely inside enterprises, NVIDIA introduced the NVIDIA OpenShell runtime and the NVIDIA NemoClaw stack, which combine policy enforcement, network guardrails, and privacy protections. This layered approach acknowledges that while open source software is powerful, enterprises need guardrails to ensure it operates safely within their security and compliance requirements.

## Why Is NVIDIA Taking AI to Space?

In a surprising announcement, Huang revealed that NVIDIA is going to space. Future systems like NVIDIA Space-1 Vera Rubin are being designed to bring AI data centers into orbit, extending accelerated computing from Earth to space. While this may sound like science fiction, it reflects a serious recognition that as AI workloads grow exponentially, terrestrial data centers may not be sufficient to meet demand.
Space-based infrastructure could offer advantages in cooling, power efficiency, and latency for certain applications. The announcement also underscores NVIDIA's ambition to position itself not just as a chip company, but as the foundational infrastructure provider for the AI era. By thinking decades ahead about where computing will need to happen, NVIDIA is signaling its commitment to remaining relevant as AI evolves.

## What Should Organizations Do Now?

For enterprises and AI developers, the announcements at GTC 2026 suggest several immediate priorities. Organizations should begin evaluating their AI infrastructure strategy with agentic AI in mind, not just traditional model training and inference. They should also explore NVIDIA's simulation tools and reference designs to understand how Vera Rubin and future platforms might fit into their long-term computing roadmaps. Finally, companies should develop an OpenClaw strategy, as Huang emphasized, to ensure they can safely deploy AI agents as the technology matures.

The shift toward agentic AI and platforms like Vera Rubin represents a maturation of the AI industry. The era of simply scaling up models is giving way to an era of building systems that can reason, act, and adapt. NVIDIA's $1 trillion revenue projection reflects confidence that this transition will drive unprecedented infrastructure spending. For organizations building AI systems, the message is clear: the next phase of AI requires rethinking not just software, but the entire computing stack from silicon to security.