Nvidia CEO Jensen Huang recently claimed that artificial general intelligence, or AGI, has already been achieved, but his definition of this milestone looks radically different from what most technologists have long imagined. Speaking on Lex Fridman's podcast, Huang suggested that AI systems like Anthropic's Claude can now accomplish what many consider a hallmark of AGI: creating and running profitable billion-dollar businesses. This redefinition matters because it shifts the conversation away from abstract measures of intelligence and toward concrete economic output.

What Does Huang Actually Mean by 'AGI Achieved'?

When Fridman asked Huang whether it would take five to 20 years for AI to innovate, find customers, and manage a team to build a billion-dollar company, Huang responded that this milestone is not a distant goal but something already possible. His reasoning centers on the idea that AI models could generate a viral web service or app that attracts billions of users, generates revenue, and then disappears, much like many internet startups did during the dot-com era.

Huang illustrated his point with a specific example: "It is not out of the question that a Claude model was able to create a web service, some interesting little app that all of a sudden, you know, a few billion people used for 50 cents, and then it went out of business again shortly after," he explained. He noted that most websites from the internet boom were not more sophisticated than what current AI systems such as OpenAI's models or Anthropic's Claude could generate today.

This framing is notably different from how the tech industry has traditionally discussed AGI. For decades, researchers and executives have debated whether AGI means passing human-level tests, writing novels, or demonstrating reasoning abilities that match human cognition. Huang's metric is strictly capitalistic: the ability to build and operate a 10-figure enterprise, even if that success is temporary.
Why This Definition Matters for Tech Companies and Investors

Huang's redefinition of AGI carries significant implications for how the industry views AI's current capabilities and future trajectory. By declaring AGI "achieved," he conveniently reinforces the necessity of Nvidia's own products. If AGI is already here, the argument goes, then demand for Nvidia's high-end chips becomes critical as major tech companies like Google and Microsoft scale up their data centers to meet AI infrastructure needs.

However, investors and industry observers should approach this claim with healthy skepticism. The definition leans heavily on monetizing temporary virality rather than demonstrating sustained institutional management or the broad, humanlike reasoning that the industry has traditionally associated with true AGI. A model generating a profitable app represents a narrow type of success, not the comprehensive intelligence transformation many expected.

How to Evaluate AI's Real-World Capabilities Today

- Task-Specific Performance: Current AI systems excel at narrow, well-defined tasks like writing code, generating marketing copy, or answering questions, but struggle with complex, multi-step problems requiring sustained reasoning over long periods.
- Economic Viability: AI can create profitable short-term ventures and generate revenue through specific applications, but building and managing complex organizations like hardware companies remains beyond current capabilities.
- Human Oversight Requirements: Even advanced AI systems require human engineers, managers, and decision-makers to function effectively in real-world business environments, indicating they have not achieved full autonomy.

Huang himself acknowledged these limitations when discussing his own company. Even if an AI agent catches a trend or creates a digital influencer that generates a billion dollars, it is not ready to replace the engineers at a complex hardware giant like Nvidia.
"The odds of 100,000 of those agents building Nvidia is zero percent," he stated. "The number of software engineers at Nvidia is gonna grow, not decline. And the reason for this is because the purpose of a software engineer and the task of a software engineer coding are related, not the same. I wanted my software engineers to solve problems. I didn't care how many lines of code they wrote."

This admission reveals a crucial gap in Huang's AGI claim. While AI can generate code and execute specific tasks, the strategic problem-solving that defines engineering leadership remains distinctly human. Nvidia plans to grow its software engineering workforce, not shrink it, because the company needs humans who can think beyond immediate coding tasks and solve novel, complex problems.

The Broader Debate Over AGI's Definition

The tech industry has long struggled to define AGI precisely, with debates often hinging on human-centric tests or specific benchmarks. Some researchers point to performance on standardized tests, others to the ability to learn new tasks without retraining, and still others to general reasoning capabilities that match human intelligence across diverse domains.

Huang's capitalistic redefinition adds a new dimension to this conversation, but it also highlights how malleable the term has become. By shifting the goalpost from abstract intelligence measures to economic output, Huang makes AGI sound more achievable while simultaneously making it less meaningful as a measure of true artificial general intelligence. A system that can generate a viral app is impressive, but it does not necessarily demonstrate the kind of flexible, generalizable reasoning that most researchers associate with genuine AGI.

The stakes of this definitional debate are high. If companies and investors accept Huang's framing, it could accelerate investment in AI infrastructure and applications.
If skeptics prevail, the industry may continue to view current AI systems as powerful but narrow tools rather than the beginning of a new era of machine intelligence. For now, Huang's claim serves as a useful reminder that how we define progress in AI often depends less on technical breakthroughs and more on what we choose to measure.