AI companies are racing to build artificial general intelligence (AGI), a system that could match or exceed human intelligence across all domains, but the competitive pressure driving that race stems from profit motives rather than safety concerns, according to investigative journalist Karen Hao. In a detailed discussion of her book "Empire of AI," Hao challenges the narrative that AI development benefits everyone equally, pointing instead to systemic exploitation, global inequality, and the strategic manipulation of how companies define AGI itself.

What Is Sam Altman's Real Influence Over OpenAI's Direction?

Sam Altman, CEO of OpenAI, has become a polarizing figure whose influence extends beyond public statements into the organization's internal decision-making. According to Hao's reporting, Altman played a direct role in shaping OpenAI's leadership dynamics, particularly around concerns about Elon Musk's involvement.

"Altman then appealed personally to Greg Brockman and said don't you think that it would be a little bit dangerous to have Musk be the CEO of this company," explained Karen Hao, contributing writer at The Atlantic and author of "Empire of AI."

Karen Hao, Contributing Writer at The Atlantic

This behind-the-scenes maneuvering reveals how personal and strategic concerns shape the leadership of major AI organizations. Hao notes that how stakeholders perceive Altman depends heavily on whether they align with his vision for AI's future: those who support his direction view him as an invaluable leader, while those who disagree may feel manipulated by his strategic positioning.

How Can We Evaluate AI Companies' Claims About Artificial General Intelligence?

One of the most critical issues Hao raises is the lack of scientific consensus on what human intelligence actually is, which creates a fundamental problem for defining AGI. This ambiguity gives companies enormous flexibility in how they use the term strategically.
- Lack of Scientific Consensus: There is no agreed-upon definition of human intelligence, making it impossible to establish clear benchmarks for what AGI should achieve or when it has been reached.
- Strategic Manipulation: Companies can define AGI however they want to suit their interests, allowing them to claim progress toward AGI without meeting any objective standard.
- Regulatory Flexibility: The ambiguity in AGI definitions allows companies to shape regulatory discussions and public perception around their own framing rather than scientific evidence.
- Moving Goalposts: Without established goalposts for the field, companies can continuously redefine what AGI means as they develop new capabilities, making it nearly impossible to hold them accountable.

"There are no goalposts for this field and there are no goalposts for the industry," noted Karen Hao. "These companies can just use the term artificial general intelligence however they want."

Why Are AI Workers Facing Exploitation Despite Industry Growth?

While AI companies promote their technologies as transformative and beneficial, the human cost of AI development remains largely hidden from public view. Hao's research reveals a troubling pattern of labor exploitation that disrupts traditional career paths and job security.

The AI industry creates a cycle in which workers are laid off and then retrained to support new AI models. This pattern breaks what Hao calls "the career ladder," preventing workers from building stable, long-term careers. The companies that profit enormously from AI development are simultaneously the ones extracting extraordinary amounts of labor from workers, often at lower wages and with less job security than traditional tech roles.

"They exploit an extraordinary amount of labor which breaks the career ladder," stated Karen Hao. "The current production of these technologies right now is exacting a lot of harm on people."
This exploitation is not limited to the United States. The benefits of AI are concentrated in Silicon Valley and other tech hubs, while the harms are distributed globally. Workers in developing nations often bear the burden of training AI models through low-wage data labeling and annotation work, while the profits flow to wealthy tech companies and investors.

Where Do AI's Benefits Actually Go?

The rhetoric surrounding AI emphasizes universal benefits and shared prosperity, but this narrative breaks down when examined outside of Silicon Valley. Hao's reporting shows that the promised advantages of AI are not equally distributed across different regions and communities.

"You really start to see that rhetoric break down when you go to places that look nothing like Silicon Valley," explained Karen Hao.

This disparity highlights a fundamental problem with how AI development is currently structured. The companies driving AI advancement are primarily motivated by profit, which means they focus on markets and applications that generate the highest returns. Communities outside wealthy tech hubs, developing nations, and marginalized populations are often left behind, receiving neither the benefits of AI advancement nor meaningful input into how these technologies are developed.

What Are the Existential Risks That AI Companies Downplay?

Beyond labor exploitation and the unequal distribution of benefits, Hao raises perhaps the most urgent concern: the existential risks posed by AI development. While companies like OpenAI emphasize safety research, the competitive pressure to develop AGI first may override safety considerations.

"AI is probably the most likely way to destroy everything," warned Karen Hao.
The race to develop AGI, driven by profit motives and national competition, creates perverse incentives that may prioritize speed over safety. Companies that slow down to address safety concerns risk being overtaken by competitors who move faster. This dynamic, combined with the lack of clear definitions and goalposts for AGI, means the industry is essentially running an uncontrolled experiment with potentially catastrophic consequences.

Hao's work suggests that understanding AI's future requires looking beyond the optimistic narratives promoted by tech leaders. It demands examining the profit motives driving development, the labor exploitation sustaining the industry, the unequal distribution of benefits, and the existential risks that remain largely unaddressed. For anyone seeking to understand AI's true impact on society, these uncomfortable truths are essential context.