The Uncontrollable AI Problem: Why Tech Giants May Be Building Something They Can't Stop
Tech industry insiders are raising alarms about a fundamental problem: companies are building increasingly powerful AI systems without corresponding safeguards to control them, and global competition is making the situation worse. According to Tristan Harris, co-founder of the Center for Humane Technology, the current trajectory resembles an uncontrolled race in which the pursuit of artificial general intelligence (AGI), AI systems matching human-level reasoning across all domains, has become disconnected from the wisdom needed to deploy it safely.
What Exactly Are Tech Leaders Worried About?
Harris, who previously worked as a design ethicist at Google and gained prominence through the documentary "The Social Dilemma," has spent the last decade studying how technology shapes human behavior. His concerns about AI represent a natural evolution from his earlier work on social media manipulation. In January 2023, he began receiving calls from researchers inside major AI laboratories expressing serious concerns about the direction of their work.
The core worry centers on what Harris describes as building "a super-intelligent god entity" that may soon exceed human ability to control it. Unlike earlier technologies, which operated within predictable physical constraints, AI systems present a novel problem: they improve themselves through machine learning, so their capabilities can expand in ways their creators never anticipated.
Harris draws a parallel to bridge engineering to illustrate the problem. Just as civil engineers understand the physics of structural failure, technologists should understand the science of how AI systems behave. Yet the current industry approach treats AI development as though it operates outside normal cause-and-effect relationships, he argues.
How Is Competition Accelerating the Risk?
One of Harris's most pressing concerns involves what he calls a "race to the bottom" driven by global competition, particularly with China. When multiple organizations compete to build the most capable AI system first, the incentive structure shifts away from safety toward speed. Companies face pressure to deploy systems before fully understanding their implications, creating what Harris describes as a perilous dynamic where safety considerations become secondary to competitive advantage.
This competitive pressure creates a coordination problem: even if one company wanted to slow down and implement stronger safety measures, doing so would put it at a disadvantage relative to competitors willing to move faster. The result is a tragedy-of-the-commons scenario in which individually rational decisions by companies lead to collectively irrational outcomes for society.
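The coordination problem described above has the structure of a classic prisoner's dilemma. The following is a minimal sketch, not anything from Harris or the article itself; the strategy names and payoff numbers are invented purely to show why each lab races even though mutual caution would leave both better off.

```python
# Hypothetical two-lab "race to the bottom" as a prisoner's dilemma.
# Strategies: "cautious" (slow down for safety) or "race" (deploy fast).
# PAYOFF[a][b] is the payoff to a lab playing `a` against a rival playing `b`.
# All numbers are illustrative assumptions, not data.
PAYOFF = {
    "cautious": {"cautious": 3, "race": 0},  # mutual caution pays well, but a cautious lab loses to a racer
    "race":     {"cautious": 4, "race": 1},  # racing exploits a cautious rival; mutual racing pays worst-but-one
}

def best_response(rival_strategy):
    """Return the strategy that maximizes payoff against the rival's choice."""
    return max(PAYOFF, key=lambda s: PAYOFF[s][rival_strategy])

# Racing is the best response whatever the rival does (a dominant strategy)...
assert best_response("cautious") == "race"
assert best_response("race") == "race"

# ...so both labs race and each earns 1, even though mutual caution pays 3 each.
assert PAYOFF["race"]["race"] < PAYOFF["cautious"]["cautious"]
```

Under these assumed payoffs, no lab can unilaterally choose caution without losing ground, which is exactly why Harris argues the fix must come from coordination and governance rather than from any single company's restraint.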
Steps to Address the AI Control Problem
- Develop Self-Improving Governance: Harris argues that rather than building "digital bunkers" or isolated safety measures, society needs governance systems that can evolve as quickly as AI technology itself, creating feedback loops that keep pace with 21st-century innovation.
- Shift Focus from Technology Neutrality: The tech industry must abandon the assumption that technology is neutral and acknowledge that design choices have predictable psychological and social consequences, similar to how bridge engineering has predictable physical consequences.
- Prioritize Wisdom Alongside Intelligence: Building more capable AI systems without corresponding advances in wisdom about how to deploy them safely creates an asymmetry that Harris identifies as fundamentally dangerous to human flourishing.
Harris's perspective draws from his earlier work on social media ethics, where he observed how design choices by a small number of engineers in San Francisco reshaped the psychological environment for billions of people. He noted that in 2012 and 2013, when Instagram was being developed, only a handful of designers understood how their choices around features like infinite scroll, autoplay, and notifications would affect human attention and behavior.
"Never before in history have 50 designers in San Francisco basically, through their choices, rewired the entire psychological habitat of humanity. And we need to get this right. We have a moral responsibility to get this right," Harris stated in a presentation he made at Google that circulated to hundreds of employees.
Tristan Harris, Co-founder of the Center for Humane Technology
That same dynamic now applies to AI development, but with higher stakes. The decisions being made by researchers and engineers at AI labs today will shape not just human attention, but potentially human agency and autonomy itself. Harris emphasizes that technology is not inevitable or neutral; it results from deliberate human choices about how systems should work.
The challenge Harris identifies is that the current incentive structure in AI development does not reward caution or wisdom. Companies that move fastest gain market share and influence. Those that pause to consider safety implications risk falling behind. This creates a systematic bias toward risk-taking that Harris argues is incompatible with the stakes involved in developing superintelligent systems.
Harris's call for "self-improving governance" suggests that regulatory frameworks and safety practices need to evolve as rapidly as AI capabilities themselves. Static regulations, no matter how well-intentioned, will become obsolete as AI systems become more capable. Instead, he advocates for adaptive governance structures that can respond to emerging risks in real time, similar to how immune systems adapt to new pathogens.
The conversation between Harris and podcast host Chris Williamson highlights a critical gap in current AI discourse: while much public attention focuses on specific AI harms like bias or misinformation, Harris argues that the deeper existential risk stems from the fundamental loss of human control over increasingly autonomous systems. This is not a problem that can be solved through better content moderation or algorithmic transparency alone; it requires rethinking how AI systems are developed and deployed from the ground up.