Artificial general intelligence (AGI) remains one of the most hotly debated concepts in technology, yet there is no agreed-upon definition of what it actually is. NVIDIA CEO Jensen Huang recently stated that the company has "achieved AGI," but his claim highlights a fundamental problem: different companies define AGI in completely different ways, making it nearly impossible to know when, or if, the milestone has truly been reached.

Why Can't Tech Leaders Agree on What AGI Is?

The core issue is that AGI, which refers to artificial intelligence that can understand and perform any intellectual task humans are capable of, lacks a universally accepted definition. This ambiguity has allowed companies to set their own goalposts, sometimes making it easier to claim they've reached the milestone.

Different organizations measure AGI in strikingly different ways. Google DeepMind defines AGI as "an AI that has at least the same capabilities as a skilled adult in most cognitive tasks," but this definition leaves critical questions unanswered: what exactly constitutes a "skilled adult," and how broad should "most cognitive tasks" be? Meanwhile, Microsoft and OpenAI have taken a financial approach, agreeing that "AGI will be achieved when OpenAI develops an AI that generates 15 trillion yen in profits." This treats AGI as an economic milestone rather than a technical one.

What Did Jensen Huang Actually Say About AGI?

On March 23, 2026, NVIDIA CEO Jensen Huang appeared on computer scientist Lex Fridman's podcast and stated, "I think we have achieved AGI." The context matters significantly, however. Fridman had defined AGI in the conversation as "basically an AI system that can do your job" and suggested that "AGI can start, grow, and run successful technology companies worth more than $1 billion." Huang was responding to this specific definition when he claimed AGI had already arrived. "I think it's now.
I think we have already achieved AGI," Huang stated.

However, Huang's optimism came with a significant caveat. He acknowledged that even if AI reaches AGI by these measures, what it builds may only be fleeting trends. He noted that while AI agents could create billions of dollars in sales through countless small applications, "the possibility of 100,000 such AI agents building a company like NVIDIA is 0%." In other words, achieving AGI in one narrow sense doesn't necessarily mean AI can replicate the sustained innovation required to build transformative companies.

How Are Other Tech Leaders Responding to the AGI Debate?

The disagreement over AGI definitions extends beyond NVIDIA. Sam Altman, CEO of OpenAI, has taken a different stance, predicting in December 2024 that "AGI will arrive sooner than most people around the world think, and its importance will be far less." This suggests that Altman believes the milestone may arrive quietly and have less transformative impact than many expect.

The lack of consensus has drawn criticism from technology observers. On Hacker News, some users criticized Huang's use of the term AGI, arguing that he was simply referring to AI agents that can automate tasks, not true artificial general intelligence. Others pointed out that podcast host Lex Fridman often uses exaggeration to attract attention, and suggested that Huang was merely confirming that the specific AGI goals Fridman had defined in their conversation had already been met.

Steps to Understanding the AGI Definition Problem

- Recognize the Financial Definition: Microsoft and OpenAI define AGI through a 15 trillion yen profit threshold, treating it as an economic milestone rather than a technical achievement, which differs fundamentally from capability-based definitions.
- Understand Capability-Based Approaches: Google DeepMind's definition focuses on matching skilled adult performance across cognitive tasks, but leaves ambiguous what "skilled" means and which tasks count as "most" cognitive tasks.
- Consider Task-Specific Definitions: Lex Fridman's definition in the podcast focused on whether AI can do your job or run billion-dollar companies, which is narrower than the broader concept of matching all human intellectual capabilities.
- Evaluate Claims Critically: When tech leaders announce AGI achievements, examine which specific definition they're using, as the claim may apply only to one narrow interpretation rather than the broader concept of artificial general intelligence.

The fundamental problem is that massive investments are being made in AGI development despite this definitional ambiguity. Companies like Microsoft have invested over 100 billion yen in OpenAI specifically for AGI development, and Google co-founder Sergey Brin has stated that "if we work on development for 60 hours a week in the office, Google can develop AGI and lead the industry." Yet without agreement on what AGI actually is, it's unclear whether these investments are targeting the same goal.

The AGI definition crisis reveals a deeper issue in artificial intelligence development: the field's most ambitious goal remains poorly defined. Until the industry reaches consensus on what AGI means, claims of achievement will continue to spark debate rather than celebration. For investors, policymakers, and the public trying to understand AI's trajectory, this lack of clarity makes it difficult to assess whether the technology is truly approaching its ultimate potential or simply being rebranded to match whatever milestone companies have already achieved.