AI companies are strategically manipulating how they define artificial general intelligence (AGI) to serve their business interests, while the industry's profit-driven approach simultaneously creates existential risks and harms workers globally. According to Karen Hao, author of "Empire of AI" and contributing writer at The Atlantic, the lack of scientific consensus on human intelligence has left the field without clear goalposts, allowing companies to redefine AGI whenever it suits their narrative.

How Are Companies Using AGI Definitions to Their Advantage?

The ambiguity surrounding AGI definitions has become a strategic tool for major tech firms. Hao explained that because there is no scientific consensus on what human intelligence actually is, companies face no constraints when claiming progress toward AGI. This flexibility allows them to shift timelines, adjust expectations, and maintain investor confidence without delivering on promises.

"There are no goalposts for this field and there are no goalposts for the industry. These companies can just use the term artificial general intelligence however they want to," Hao said.

This definitional flexibility has real consequences. When companies control how AGI is defined, they also control the narrative around safety milestones, regulatory compliance, and public expectations. The strategic use of the term allows firms to claim breakthroughs that may not represent genuine progress toward human-level artificial intelligence.

What Drives AI Development if Not Safety Concerns?

Hao's analysis reveals that profit motives, not safety considerations, are the primary drivers of AI development decisions. The competitive pressure to accelerate research, combined with the enormous financial rewards of AI advancement, creates an environment where ethical considerations take a backseat to market dominance.
The consequences of this profit-first approach extend far beyond corporate boardrooms. Current AI technologies are causing significant harm to people and society, yet these harms remain largely invisible to the public. The industry's focus on growth and market share has meant that the negative impacts of AI development receive minimal attention compared to the promised benefits.

- Labor Exploitation: AI companies exploit workers by breaking traditional career ladders through cycles of layoffs and retraining, disrupting job security and economic stability for millions.
- Unequal Benefits Distribution: The promised benefits of AI are concentrated in Silicon Valley and tech hubs, while communities outside these centers experience the harms without accessing the advantages.
- Existential Risk Minimization: Companies downplay or ignore existential risks posed by advanced AI systems to maintain investor confidence and avoid regulatory scrutiny.

Hao noted that the rhetoric of AI benefiting everyone breaks down when examined outside of Silicon Valley. The disparity between promises and reality reveals a fundamental disconnect between how the industry markets AI and how it actually impacts diverse global communities.

Why Is Existential Risk Being Overlooked in the AGI Race?

Despite the potential for catastrophic outcomes, existential risk remains marginalized in mainstream AI discussions. Hao emphasized that AI represents perhaps the most likely mechanism for destroying civilization, yet this concern receives far less attention than profit-focused narratives.

"AI is probably the most likely way to destroy everything," Hao warned, highlighting the urgency of safety discussions that remain sidelined by commercial interests.

The competitive dynamics between nations and companies create perverse incentives that discourage safety-first approaches.
When one nation or company believes that accelerating AI research will make it superior, the pressure to move faster intensifies. This race mentality pushes safety considerations to the margins, even as the stakes grow higher.

Leadership decisions at major AI firms further illustrate how profit and strategic positioning override safety concerns. Internal dynamics at companies like OpenAI have been shaped by competitive anxieties and personal ambitions rather than principled commitments to safe AI development. These institutional pressures trickle down to influence which research gets funded, which safety measures get implemented, and which risks get acknowledged.

What Would It Take to Realign AI Development With Safety?

Addressing the misalignment between profit incentives and safety requires acknowledging that current market structures reward speed over caution. Companies that prioritize safety measures may face competitive disadvantages against those willing to cut corners. This creates a collective action problem: no individual firm can unilaterally choose safety without risking market share.

Understanding these dynamics is crucial for anyone trying to make sense of AI's trajectory. The gap between what companies say about AI safety and what they actually prioritize reveals the true drivers of development. Until the financial incentives change, or until external pressure forces accountability, the industry's profit-first approach will likely continue to shape how AGI is defined, pursued, and ultimately deployed.

The stakes of this misalignment extend beyond corporate profits or individual job losses. They touch on fundamental questions about humanity's future and whether we can maintain meaningful control over increasingly powerful AI systems. Hao's analysis suggests that without structural changes to how AI development is incentivized and governed, existential risks will remain underestimated and under-addressed.