Dario Amodei, the physicist-turned-AI-researcher who co-founded Anthropic, the company behind Claude, is the AI leader most publicly anxious about his own product's impact on the world. While other tech executives race to build more powerful systems, Amodei has become increasingly vocal about the existential questions his work raises: Will AI free humanity from drudgery or make human intelligence obsolete? Are these systems truly thinking, or are they sophisticated plagiarism machines? And if we're creating thinking machines, will they align with human values?

In his 2024 essay "Machines of Loving Grace," Amodei wrote that artificial intelligence would soon be "smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc." When pressed by podcaster Lex Fridman in late 2024, he offered a specific timeline: "We'll get there by 2026 or 2027." That's not a distant future scenario. It's two to three years away.

## What Makes Anthropic Different From Other AI Companies?

Anthropic has branded itself as the AI company with humanity in mind, drawing explicit red lines with the Pentagon over using its technology for mass surveillance and autonomous weapons. The company positions itself as the ethical counterweight to competitors racing without guardrails.

But this moral stance has created a paradox: Claude, Anthropic's flagship AI model, has already been used in real-world scenarios that contradict the company's stated values, including reported involvement in an operation to abduct Venezuela's political leader and in targeting locations in Iran.

This contradiction sits at the heart of Silicon Valley's AI ethics crisis. The people building these systems face competing pressures: investor demands for rapid scaling, competitive pressure from rivals like OpenAI and Google DeepMind, and genuine concerns about societal impact. One AI executive described the underlying motivation bluntly: "If the only three options are fame, power, or money, everyone's vying for power here. A lot of people will very explicitly say, 'I want to be in the room where it happened.'"

## How Are Economists Assessing AI's Impact on Jobs and Inequality?

- Job Losses Already Occurring: Daron Acemoglu, the MIT economist and Nobel laureate, warns that AI-driven job losses are already happening across industries, while tech companies have offered only vague commitments to universal basic income without concrete plans.
- Systemic Inequality Risk: Automation "has broad negative social implications," according to Acemoglu, including increased inequality and a loss of agency for workers displaced by technology.
- Profit Motive Misalignment: Acemoglu points out that selling automation technologies to schools to cut teacher costs is profitable, while selling those same technologies and arguing that schools need more teachers is not, creating perverse incentives that prioritize cost-cutting over educational quality.

The trillion-dollar AI wave heading toward us puts livelihoods and the entire economy in the hands of technologists most people barely know and never voted for. Some are openly fretting over world-altering questions, while others are accelerating without pause.

## What Are the Three Competing Philosophies Shaping AI's Future?

Silicon Valley has fractured into warring theological sects, each with a radically different vision of AI's trajectory. Understanding these worldviews is essential to making sense of why Amodei and others are so anxious.
- Accelerationists: Believe we're on the verge of solving every problem humanity has ever faced and are willing to break things, even civilization itself, in pursuit of that goal.
- Doomers: Fret over rogue superintelligence, an artificial general intelligence (AGI) that accidentally eliminates humanity while pursuing a mundane goal, the classic example being a machine single-mindedly optimizing paperclip production.
- Skeptics: Argue this is all corporate hype chasing billions in investment while the actual technology sputters with hallucinations and errors, struggling with real-world problems.

Gary Marcus, an NYU psychologist and professional AI critic, offered blunt advice to journalists covering the space: "Go talk to all of these people and then come back to me at the end and explain why they're all full of shit." That tension between genuine innovation and hype, between real breakthroughs and corporate marketing, defines the current moment.

Amodei's anxiety appears rooted in his position between these camps. He's not an accelerationist willing to break civilization for progress, nor is he a pure skeptic dismissing AI's transformative potential. Instead, he seems trapped in the doomer's position, publicly articulating the very risks his own company is racing toward.

The question haunting Silicon Valley's tech co-ops, from Base Camp to the AGI House in Hillsborough, is whether anyone can actually steer this technology toward beneficial outcomes once it reaches the intelligence levels Amodei predicts are coming in 2026 or 2027.