Why Anthropic Just Recruited a Pharma CEO to Its Board

Anthropic has appointed Vas Narasimhan, the CEO of Swiss pharmaceutical giant Novartis, to its board of directors, making him the first executive from the pharmaceutical industry to join the AI startup's governing body. The appointment, made by Anthropic's Long-Term Benefit Trust, reflects a broader shift in how AI companies are building governance structures as their systems move closer to real-world applications in healthcare, finance, and other high-stakes domains.

Narasimhan joins a board that includes Anthropic cofounders Dario Amodei and Daniela Amodei, Netflix chairman Reed Hastings, Confluent CEO Jay Kreps, and former Microsoft executive Chris Liddell, who was added in February. With Narasimhan's appointment, Trust-appointed directors now hold a majority of the board, a structural detail that underscores Anthropic's commitment to independent oversight.

What Makes a Pharma CEO Valuable to an AI Company?

Narasimhan brings a rare combination of expertise that directly mirrors Anthropic's stated priorities. As Novartis CEO, he has overseen the development and approval of more than 35 novel medicines, a track record that required navigating some of the world's most stringent regulatory frameworks. Early in his career, he worked on public health initiatives targeting HIV/AIDS, malaria, and tuberculosis across India, Africa, and South America. He is an elected member of the US National Academy of Medicine and the Council on Foreign Relations, and sits on boards at the University of Chicago and Harvard Medical School.

"Vas brings something rare to our board. He has spent his career doing what we are trying to do with AI, taking powerful, complex technology and getting it to people safely at scale," said Daniela Amodei, cofounder and president of Anthropic.
The parallel is deliberate. Just as Narasimhan has had to balance innovation speed with patient safety in drug development, Anthropic faces similar pressures as it scales Claude, its AI assistant, into real-world applications. The pharmaceutical industry operates under FDA (Food and Drug Administration) oversight, clinical trial requirements, and post-market surveillance systems that have evolved over decades to manage risk. Anthropic is attempting to build analogous safeguards for AI systems that could eventually influence medical research, drug discovery, and clinical decision-making.

How Is Anthropic Structuring Its Governance for Long-Term Accountability?

Anthropic's governance model is unusual in the AI industry. The company was founded as a public-benefit corporation, and its board operates under the oversight of the Long-Term Benefit Trust, an independent body whose members have no financial stake in the company. This structure is designed to insulate key decisions from short-term financial pressures as the company scales frontier AI systems, the most advanced models available.

The Trust's role is to ensure that Anthropic balances its obligations to shareholders with its public benefit mission of developing AI for the long-term benefit of humanity. Neil Shah, chair of the Long-Term Benefit Trust, explained the rationale for Narasimhan's appointment:

"The Long-Term Benefit Trust's role is to appoint directors who will ensure Anthropic responsibly balances its commitment to stockholders and its public benefit mission as the company grows. Vas has spent his career stewarding breakthrough science responsibly, exactly the perspective we are excited to have on the board as we develop consequential technology," Shah stated.
That Trust-appointed majority matters because it means major decisions about how Claude is deployed, which applications receive priority, and how safety trade-offs are managed will be shaped by directors selected for long-term stewardship rather than shareholder returns alone.

Steps Anthropic Is Taking to Integrate Domain Expertise Into AI Governance

  • Recruiting Regulated-Industry Leaders: Anthropic is actively recruiting executives from highly regulated sectors like pharmaceuticals, finance, and defense to guide deployment decisions. Narasimhan's appointment is the second major board addition in recent months, following Chris Liddell's February appointment.
  • Emphasizing Safety Over Speed: Narasimhan emphasized in his own statement that "speed alone isn't the goal" in healthcare AI, and that "what matters just as much is how these tools are built, governed, and ultimately applied in the real world." This philosophy aligns with Anthropic's stated focus on deployment discipline and controlled release of advanced models.
  • Building Governance Structures That Survive Scaling: The Long-Term Benefit Trust model is designed to maintain independent oversight even as Anthropic grows and potentially goes public. Anthropic is reportedly weighing an initial public offering that could occur as early as this year, making governance structures a critical priority.

Why Does This Matter Now?

AI systems are moving rapidly from research labs into domains where they can directly influence human health and safety. Narasimhan noted that "in healthcare, AI is already accelerating solutions to some of the hardest scientific challenges, from deepening our understanding of disease biology to designing better medicines." But unlike software updates or consumer apps, errors in medical AI can have irreversible consequences. A misdiagnosis suggested by an AI system, a drug interaction missed by an algorithm, or a clinical trial design flaw introduced by an automated system could harm patients.
The pharmaceutical industry has spent decades building regulatory frameworks, clinical trial protocols, and post-market surveillance systems to manage these risks. Narasimhan's appointment signals that Anthropic is attempting to import that institutional knowledge into AI governance. His experience navigating FDA approval processes, managing global regulatory compliance, and balancing innovation with safety provides a template that other AI companies may eventually follow.

Narasimhan himself framed the appointment as an opportunity to set standards: "Anthropic is setting the standard for how AI should be developed to benefit humanity, and I'm honored to join the Board and contribute to its mission." That language suggests Anthropic views its governance model not just as an internal safeguard, but as a potential blueprint for how the broader AI industry should operate as systems become more powerful and consequential.