The AI Regulation Divide: Why State Laws and Long-Term Safety Are on a Collision Course

The challenge of governing artificial intelligence has split into two competing visions: those writing state laws today and those planning for transformative AI decades ahead. At Harvard's Berkman Klein Center, two leading voices in AI policy are preparing to discuss this fundamental tension, revealing a gap that could shape how the world manages one of its most consequential technologies.

Why Are States and Long-Term Thinkers at Odds Over AI Regulation?

Nathan Calvin, General Counsel and Vice President of State Affairs at Encode, has spent the past year shaping frontier AI legislation in state legislatures. He was instrumental in California's SB 53, the Transparency in Frontier Artificial Intelligence Act, and has led efforts to scrutinize OpenAI's nonprofit restructuring. His work focuses on immediate, actionable policies that legislatures can implement now.

Meanwhile, Fin Moorhouse, a Research Fellow at Forethought, approaches AI governance from a different angle. Rather than focusing on current regulations, he examines what rapid AI progress could mean over the coming century. His work explores how far AI capabilities might advance beyond today's frontier and what an "intelligence and industrial explosion" could mean for society.

The tension between these approaches reveals a critical gap in how governments are thinking about AI. State-level regulations like SB 53 address transparency and safety concerns that exist right now. But they may not account for scenarios where AI capabilities advance far beyond current expectations, raising questions about whether today's regulatory frameworks will remain relevant or sufficient.

What Are the Key Battlegrounds in AI Policy Right Now?

Calvin's work highlights several pressing policy fights that are reshaping AI governance in real time. These include:

  • Frontier AI Transparency: California's SB 53 requires developers to disclose safety testing and capabilities of advanced AI systems before deployment, setting a precedent for transparency-focused regulation.
  • Corporate Governance Scrutiny: Encode has challenged OpenAI's nonprofit restructuring, questioning whether the company's governance structure adequately protects the public interest as AI capabilities grow.
  • State Versus Federal Authority: A major battle is emerging over whether states can regulate AI independently or whether federal law should preempt state-level rules, potentially creating a patchwork of inconsistent standards.
  • The RAISE Act: New York's Responsible AI Safety and Education Act represents another front in state-level AI governance, though the specifics of its provisions remain a focus of ongoing policy debate.

These battles are not abstract. They determine whether AI companies face consistent rules across the country or navigate a fragmented landscape of state-by-state requirements. They also shape whether transparency and safety testing become industry standards or remain optional practices.

How Can Policymakers Bridge the Gap Between Present and Future AI Risks?

The conversation between Calvin and Moorhouse suggests that effective AI governance requires both immediate action and long-term vision. Here are the key approaches emerging from this debate:

  • Build Flexibility Into Current Rules: State regulations like SB 53 should be designed to adapt as AI capabilities evolve, rather than locking in assumptions about what AI can or cannot do.
  • Establish Baseline Safety Standards Now: Transparency requirements and safety testing protocols implemented today create infrastructure that can scale as AI systems become more powerful.
  • Coordinate Across Jurisdictions: Rather than allowing a patchwork of conflicting state laws, policymakers should work toward harmonized standards that prevent regulatory arbitrage while preserving state innovation.
  • Engage with Long-Term Research: Policymakers writing laws today should consult researchers studying advanced AI scenarios, ensuring that current regulations don't inadvertently create blind spots for future risks.

"What does it take to govern a technology that might reshape the world within the decade? Answering that requires both big-picture thinking about where AI is heading and close engagement with the policy fights shaping it in the present," stated the Berkman Klein Center in describing the event.


This framing captures the core challenge: policymakers cannot afford to wait for perfect long-term understanding before acting, but they also cannot ignore what advanced AI progress might mean for governance frameworks built today.

What Makes This Moment Critical for AI Governance?

The timing of this policy debate matters enormously. AI capabilities are advancing rapidly, and the regulatory decisions made in 2026 will shape how the technology develops for years to come. If states like California establish strong transparency and safety standards now, those practices may become industry norms. If federal preemption prevents state-level innovation, the entire regulatory landscape could shift toward lighter-touch oversight.

Calvin's focus on state-level action reflects a belief that immediate, concrete rules are necessary to address current harms and risks. Moorhouse's long-term perspective suggests that today's regulations must be robust enough to handle scenarios where AI capabilities expand far beyond current expectations. Neither approach is wrong; the real challenge is ensuring they inform each other.

The Harvard event brings these perspectives together precisely because the field of AI governance cannot afford to choose between present-focused regulation and future-focused planning. Both are essential. The question is how to design policies that address immediate safety concerns while remaining flexible enough to adapt as our understanding of AI's trajectory evolves.