The White House Just Picked a Side in America's AI Regulation Battle

The White House released a National Policy Framework for Artificial Intelligence on March 20, 2026, signaling a major shift toward federal control over AI regulation and away from the patchwork of state laws that have emerged over the past two years. The framework outlines the administration's preferred approach to federal AI legislation, essentially telling Congress which areas need national standards and which state laws should be preempted by federal authority.

Why Is the White House Pushing Back Against State AI Laws?

The administration has been signaling for months that the proliferation of state AI laws is creating barriers to innovation and compliance headaches for companies. Last summer, it urged Congress to adopt a temporary federal moratorium preempting certain state AI laws, but Congress declined. In December 2025, the administration issued Executive Order 14365, often called the "One Rule" Executive Order, which directed the Department of Justice to establish an "AI Litigation Task Force" and instructed federal agencies to assess whether they could use discretionary funding programs to discourage certain types of state AI regulation.

The March framework follows up on that commitment by laying out exactly what the administration believes federal legislation should cover and which categories of state laws should be replaced with national standards.

What Are the Framework's Six Key Priorities?

The framework spans a wide range of policy areas, but several takeaways stand out for companies developing, deploying, or testing AI systems:

  • Child Safety and Privacy: The framework emphasizes protecting minors from AI harms and empowering parents to control their children's digital environments. Congress should adopt age-assurance requirements for AI platforms likely to be accessed by minors, provide tools for parents and guardians to manage privacy and engagement settings, and limit data collection and online behavioral advertising.
  • Community and Economic Safeguards: The framework links AI policy to broader community and infrastructure considerations, recommending that AI development strengthen local communities and small businesses. It also urges Congress to augment law enforcement efforts to combat AI-enabled fraud, impersonation, and scams targeting vulnerable populations like seniors.
  • Intellectual Property Rights: The framework states the administration's view that training AI models on copyrighted material does not violate copyright laws, while acknowledging that reasonable arguments to the contrary exist. It advises Congress not to take legislative action that would influence judicial determinations regarding fair use.
  • Free Speech Protection: The framework emphasizes limits on the federal government's authority to coerce AI providers to restrict or alter content for partisan or ideological reasons, and directs Congress to provide avenues for redress where such coercion occurs.
  • Existing Regulators Over New Bureaucracy: Rather than creating a new, centralized federal AI regulatory authority, the framework encourages relying on existing sector-specific regulators and industry-led standards.
  • Federal Preemption of State Laws: The framework supports broad federal preemption of state AI laws that impose undue burdens, while preserving states' traditional police powers to enforce laws of general applicability, especially to protect children, prevent fraud, and safeguard consumers.

Notably, the framework cautions Congress against adopting ambiguous content standards or open-ended liability regimes that could generate excessive litigation risk. Although it strongly favors federal preemption of state AI laws, it underscores that Congress should not preempt states from enforcing generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such content is generated using AI.

How Should Companies Prepare for Federal AI Legislation?

The framework is not a binding document and does not by itself impose new legal obligations or direct agencies to take specific regulatory actions. Instead, it outlines a series of recommended policy approaches for Congress to consider in drafting comprehensive federal AI legislation. However, companies should begin preparing for several likely changes:

  • Age-Assurance Compliance: If your AI platform or service is likely to be accessed by minors, expect commercially reasonable, privacy-protective age-assurance requirements such as parental attestation. Implement features designed to reduce risks of harm to minors, including sexual exploitation and self-harm.
  • Parent and Guardian Controls: Build tools that empower parents and guardians by providing them with the ability to manage children's online privacy settings, content exposure, and screen time within your AI systems.
  • Data Collection Restrictions: Anticipate stricter limits on data collection for model training and targeted advertising, especially for systems accessed by minors. Ensure your practices also align with existing protections for minors, such as the TAKE IT DOWN Act, a bipartisan law enacted in May 2025 that criminalizes the nonconsensual publication of intimate digital deepfakes.
  • Fraud and Scam Prevention: Invest in mechanisms to combat AI-enabled fraud, impersonation, and scams, particularly those targeting vulnerable populations. This is now positioned as a core component of national AI strategy.
  • Intellectual Property Licensing Frameworks: Consider participating in voluntary licensing or collective-rights frameworks that would allow intellectual property rights holders to collectively negotiate compensation from AI model developers without incurring antitrust liability.

The framework also signals that the administration does not view the copyright status of AI training as a matter for legislation. Instead, it encourages Congress to enable voluntary licensing or collective-rights frameworks that would allow rights holders to collectively negotiate compensation from AI model developers without incurring antitrust liability. This suggests that the courts, not Congress, will ultimately decide whether training AI models on copyrighted material violates copyright law.

What Does This Mean for the State-Versus-Federal Regulation Debate?

The framework represents a decisive move toward federal control. It calls for precluding states from regulating AI model development or imposing liability on AI developers for unlawful conduct by third parties using their systems. However, it preserves states' ability to enforce laws of general applicability, especially those protecting children, preventing fraud, and safeguarding consumers. This is a critical distinction: states won't be able to create AI-specific regulations, but they can enforce existing consumer protection and child safety laws.

This approach would displace the patchwork of state AI laws that has emerged in recent years. By establishing a unified national standard, the administration argues, companies will face fewer compliance burdens and innovation will accelerate. However, it also means that states with stricter AI safety requirements will lose the ability to enforce those standards within their borders.

The framework is now in Congress's hands. Whether lawmakers adopt these recommendations will determine whether the United States moves toward a centralized, federal approach to AI regulation or continues to allow states to experiment with their own rules. For companies operating across multiple states, the outcome could significantly simplify compliance or, conversely, lock in a less stringent national standard than some states prefer.