How Silicon Valley's AI Safety Philosophy Became a Billionaire-Funded Movement

Longtermism, a philosophy that prioritizes existential risks from artificial general intelligence (AGI) over immediate global suffering, originated in Silicon Valley AI research communities rather than academic philosophy, according to a new analysis of the movement's intellectual history. The narrative that longtermism emerged as a natural extension of effective altruism obscures how AI researchers shaped the worldview that now influences billions in philanthropic funding and tech policy.

What Is Longtermism and Why Does Its Origin Story Matter?

Longtermism is the philosophical view that humanity stands at a critical juncture where we could either annihilate ourselves or advance toward what proponents call a "radiant future." This future, according to longtermist thinkers, would involve vast numbers of humanity's digital descendants living for billions of years across colonized galaxies. The movement argues that preventing existential threats, particularly from misaligned superintelligent machines, should be humanity's top moral priority.

The conventional account traces longtermism to Oxford philosophers Toby Ord and William MacAskill, who developed it as an extension of effective altruism (EA), a charitable movement founded in 2011 and focused on making philanthropic work maximally effective. However, this narrative omits crucial context: longtermism's intellectual roots run directly through AI safety research communities that predate EA by years.

How Did AI Researchers Shape Longtermism Before It Had a Name?

The philosophical lineage traces back to Eliezer Yudkowsky, cofounder of the Berkeley-based Machine Intelligence Research Institute (MIRI), who started blogs devoted to rationalism during the 2000s. Rationalism, in this context, aimed to improve human reasoning and decision-making so that intelligent people could create and control advanced machines. Yudkowsky posted about "Effective Altruism" in 2007, four years before Ord and MacAskill formally named their movement and two years before they founded Giving What We Can.

Toby Ord, who became a central longtermist figure, was active on Yudkowsky's blog LessWrong by 2009 and collaborated with philosopher Nick Bostrom starting in 2006. Bostrom had founded Oxford's Future of Humanity Institute (FHI) in 2005, partly to study AGI risks. While effective altruism itself wasn't explicitly AI-focused, its fundraising success and philosophical framework derived partly from discussions about AGI-related topics that would later preoccupy longtermists.

Bostrom provided the first formal articulation of longtermism's core ideas. As a graduate student, he encountered AI-oriented transhumanism through an email list managed by "Extropians," a group of modern transhumanists who believed genetic engineering, artificial intelligence, and molecular nanotechnology would enable humanity to transcend the human condition. In 1998, Bostrom cocreated the World Transhumanist Association and began reformulating transhumanist ideas for academic philosophers.

Key Intellectual Milestones in Longtermism's Development

  • 2002: Bostrom introduced the concept of "existential risk," defined as threats to humanity's potential to develop into the posthuman future envisioned by transhumanists.
  • 2003: Bostrom published "Astronomical Waste," presenting a utilitarian argument that space colonization and population expansion are so valuable that reducing risks to this future should be humanity's top moral priority.
  • 2014: Bostrom's book "Superintelligence" argued that misaligned superintelligent machines represent the biggest existential risk, receiving endorsements from Bill Gates, Elon Musk, and OpenAI's Sam Altman.
  • 2017: MacAskill coined the term "longtermism" to describe this EA-related view championed at Bostrom's FHI and at Ord's institution.
  • 2020-2022: Ord and MacAskill published high-profile books on longtermism that omitted all mention of transhumanism, reframing the movement as emerging from utilitarian philosophy rather than Silicon Valley AI circles.

How Billionaire Funding Transformed Academic Credibility Into Policy Influence

Tech leaders embraced longtermism enthusiastically once the philosophy gained academic legitimacy. Elon Musk tweeted in 2022 that Bostrom's "Astronomical Waste" was "likely the most important paper ever written" and declared MacAskill's "What We Owe the Future" "a close match for my philosophy." This adulation flowed toward researchers at institutes funded by AI billionaires.

The funding infrastructure reveals how intellectual credibility translates into influence. Skype cofounder Jaan Tallinn helped establish Cambridge University's Centre for the Study of Existential Risk and the longtermist Future of Life Institute (FLI). Elon Musk contributed $14 million to FLI, while Ethereum cocreator Vitalik Buterin gave the organization $650 million in 2021. These think tanks leverage their university affiliations to shape global policy conversations.

The relationship between longtermism and cryptocurrency wealth became public during the 2022 FTX collapse. Sam Bankman-Fried, the crypto exchange's CEO, followed MacAskill's advice to "earn to give," accumulating wealth specifically to fund effective altruism and longtermist organizations. MacAskill advised FTX's charitable fund, which pledged millions to EA and longtermist organizations that he himself led or advised. When Bankman-Fried was convicted of fraud and conspiracy in 2023, the scandal exposed how longtermist organizations had become dependent on cryptocurrency wealth.

Why Does the Origin Story Matter for AI Safety Discussions?

The rebranding of longtermism obscures important questions about whose values shape AI development priorities. By presenting longtermism as emerging from utilitarian philosophy rather than Silicon Valley transhumanism, proponents made the movement more palatable to academic and policy audiences. However, this narrative conceals that the movement's core concern, preventing AGI misalignment, originated in AI researcher communities rather than from broader philosophical inquiry.

This matters because longtermism now influences how billions in funding flow toward AI safety research, which problems receive attention, and which voices shape AI governance discussions. When the movement's intellectual origins in AI circles and transhumanist visions of digital posthumanity remain hidden, policymakers and the public cannot fully evaluate whether longtermism's priorities align with broader human values or represent a narrow subset of Silicon Valley interests.

The philosophy's emphasis on existential risks from superintelligent machines has also shaped how AI companies frame their own safety work. By accepting longtermism's framing that AGI misalignment is humanity's most pressing moral concern, tech leaders can position themselves as working on civilization-scale problems while their current systems cause documented harms through environmental damage, labor exploitation, and social injustice.