The White House's AI Gamble: Why States Are Fighting Back Against Federal Preemption

The White House released a comprehensive AI governance framework in March that prioritizes federal control over state-level AI regulation, proposing to preempt state laws governing AI development, developer liability, and AI usage restrictions. The framework has gained support from influential Republicans in Congress, including House Speaker Mike Johnson and Senator Ted Cruz, but it faces resistance from states that have already enacted their own AI safeguards.

What Does the White House Want to Block?

The framework targets three specific categories of state AI laws for federal preemption. First, it seeks to prevent states from regulating the AI development process itself, such as California's Senate Bill 53, which requires large AI companies to publish and comply with their own "frontier AI framework" describing their approach to risk management, cybersecurity, and built-in safeguards.

Second, the White House recommends legislation ensuring states cannot "penalize AI developers for a third party's unlawful conduct involving their models." This directly challenges laws like Colorado's AI Act, which creates a duty of care for AI developers whose systems make consequential decisions, requiring them to protect consumers from discrimination risks. California also passed a law stating that in civil cases alleging AI-caused harm, "it shall not be a defense, and the defendant may not assert, that the artificial intelligence autonomously caused the harm to the plaintiff."

Third, the framework argues states should not "unduly burden Americans' use of AI for activity that would be lawful if performed without AI." This principle echoes "Right to Compute" bills that have advanced through several state legislatures, with one becoming law in Montana. The provision may target laws like Colorado's AI Act, which requires businesses to conduct annual impact assessments and implement risk management programs when using AI to make consequential decisions in hiring, lending, housing, and healthcare.

Where Are the Cracks in the Federal Approach?

The framework does carve out some areas where states retain authority, though the language creates significant ambiguity. States would keep their traditional police powers to "enforce laws of general applicability against AI developers and users, including particular laws to protect children, prevent fraud, and protect consumers." However, the phrase "general applicability" is a legal term of art that could be interpreted narrowly, potentially allowing preemption of AI-specific child safety laws while preserving general-purpose laws that incidentally address AI-related child safety.

This interpretation stands in contrast to Senator Marsha Blackburn's draft AI bill, which preserves any state child safety laws offering greater protection to minors than the federal bill itself. It also somewhat conflicts with President Trump's executive order directing the creation of the AI policy framework, which specifically stated the framework must not propose preempting "otherwise lawful State AI laws" related to protecting children.

The framework also protects state authority over procurement requirements and rules for how state-provided services use AI, with explicit emphasis on law enforcement and public education. Additionally, states retain authority over zoning laws and other regulations determining the placement of AI infrastructure, a meaningful carve-out given that local resistance to data centers has delayed or blocked some projects.

How States Are Pushing Back on AI Regulation

  • Development Oversight: States like California have enacted laws requiring AI companies to publish their own risk management frameworks, covering cybersecurity and built-in safeguards for frontier AI systems.
  • Developer Accountability: Colorado and California have passed laws expanding liability for AI developers when their systems cause harm, either through duty of care requirements or by removing the defense that AI acted autonomously.
  • Professional Licensing Restrictions: Nevada and Illinois have enacted laws extending prohibitions on unlicensed therapy practice to AI chatbots, while New York has proposed legislation creating liability for chatbots offering unauthorized legal advice.

White House science and technology policy adviser Michael Kratsios suggested the Trump administration would extend preemption principles to state laws "banning particular verticals," specifically reacting to New York's Senate Bill 7263, which aims to create liability for chatbot operators engaging in unauthorized professional practice. Notably, neither Nevada nor Illinois law creates a pathway for chatbots to obtain professional licenses themselves.

"The framework identifies key areas to address," said Senator Maria Cantwell, the ranking Democrat on the Senate's commerce committee.

Senator Maria Cantwell, Ranking Member, Senate Commerce Committee

The framework emphasizes preempting what it calls "cumbersome" state AI laws, particularly those that "impose undue burdens," govern areas "better suited to the Federal Government," or conflict with the White House's goal of achieving "global AI dominance." This language reveals the administration's underlying concern: state-by-state regulation could fragment the AI market and disadvantage American companies competing globally.

The tension between federal and state authority over AI governance will likely define AI policy before the 2026 midterm elections. While the framework has gained Republican support, the ambiguity around child safety protections and the carve-outs for state procurement and infrastructure authority suggest the final legislation could look quite different from what the White House proposed. States that have already invested in AI regulation frameworks may find themselves in a complex legal landscape as Congress moves forward with preemption legislation.