The federal government is moving to centralize AI regulation, but the fight over whether Washington or individual states should set the rules is reshaping how Americans will be protected online. In March 2026, the White House released its National AI Policy Framework while Senator Marsha Blackburn unveiled her competing TRUMP AMERICA AI Act, signaling that AI governance has entered a critical legislative phase. The stakes involve children's safety, job protections, intellectual property rights, and which level of government gets final say.

What's Actually in These Two Competing AI Frameworks?

The White House framework takes what officials describe as a "middle-ground approach," focusing on four key areas it calls the "four Cs": children's safety, copyright protections, community safeguards, and American AI competitiveness. The administration wants Congress to establish age-assurance requirements for certain AI platforms, mandate safety features to prevent sexual exploitation and self-harm, and give parents better tools to manage their children's screen time and content exposure.

Senator Blackburn's proposal goes significantly further. Her bill is intentionally expansive, weaving together child protection, intellectual property rights, labor reporting requirements, and measures to address what conservatives call "ideological bias" in AI systems. The legislation would create a federal duty of care for AI chatbot developers, requiring them to exercise reasonable care to prevent foreseeable harms, with the Federal Trade Commission (FTC) empowered to enforce violations as unfair or deceptive practices.

One striking difference: Blackburn's bill includes a provision to repeal Section 230 of the Communications Decency Act after a two-year delay, eliminating long-standing immunity for online platforms regarding third-party content.
This would expose companies to substantially heightened litigation risk, a move that drew hesitation even from bipartisan Senate leadership during recent hearings.

Why Is Federal Preemption of State Laws So Controversial?

The most contentious element of both proposals is the push to preempt state AI laws. The White House framework explicitly calls for Congress to "preempt overly burdensome state AI laws" while preserving traditional state police powers and states' ability to protect children and consumers. However, the administration wants to prevent states from regulating AI development altogether and from holding developers liable for how third parties misuse their products.

This represents a dramatic shift from previous legislative efforts. In 2025, Senator Ted Cruz led an unsuccessful push for a 10-year moratorium on states enforcing their own AI laws. The House passed the measure in July, but the Senate voted 99-1 to drop it from a budget reconciliation bill, signaling strong bipartisan resistance to blanket preemption. President Trump responded by signing an executive order in December 2025 directing the Department of Justice to challenge state AI laws and tasking Congress with passing a "minimally burdensome national standard."

The political calculus has shifted because both the White House and Blackburn are now bundling preemption with popular child safety measures. By fusing AI regulation with online child protection, they aim to build broader support among lawmakers who view protecting minors as non-negotiable. The White House framework explicitly carves out an exception, stating Congress "must ensure that it does not preempt states from enforcing their own generally applicable laws protecting children."

How Would These Frameworks Actually Protect Workers and Creators?
- Labor Transparency: Blackburn's bill mandates regular disclosure of AI-related job effects. Covered entities, including publicly traded companies, federal agencies, and certain private firms, would report data on layoffs, hiring, retraining, and unfilled positions attributable to AI adoption, with the Department of Labor collecting and publicly releasing this information.
- Intellectual Property Rights: Both frameworks address creator concerns. The White House calls for a licensing framework for collective rights systems and federal protections against unauthorized distribution of AI-generated replicas of a person's voice, likeness, or other attributes, while explicitly preserving exceptions for parody, satire, and news reporting.
- Digital Replica Protection: Blackburn's bill incorporates the NO FAKES Act, which would establish a federal property right for individuals over their voice and visual likeness, making it illegal to produce or distribute unauthorized digital replicas or deepfakes.

The White House takes the position that training AI on copyrighted material does not violate existing copyright law, leaving final resolution to the courts while urging Congress to monitor fair use developments and fill any gaps if needed. This cautious approach reflects ongoing litigation over whether companies like OpenAI violated copyright law by using published works to train their models.

What About Innovation and Competitiveness?

Both frameworks prioritize American AI dominance, but with different regulatory philosophies. The White House explicitly opposes creating a new federal rulemaking body to regulate AI broadly, instead supporting "development and deployment of sector-specific AI applications through existing regulatory bodies" and "industry-led standards." The administration calls for regulatory sandboxes to advance U.S. AI leadership and federal dataset accessibility for training purposes.
Blackburn's bill takes a more hands-on approach, establishing an Advanced Artificial Intelligence Evaluation Program housed within the Department of Energy to test advanced AI systems for severe risks. This represents a middle ground between the White House's light-touch approach and calls from AI safety advocates for more rigorous federal oversight.

The administration also addresses practical concerns about AI infrastructure costs. The framework calls for codifying the Trump administration's Ratepayer Protection Pledge to shield consumers from electricity cost increases driven by data center expansion, and for streamlining federal permitting for AI infrastructure. This reflects growing concerns that massive AI training operations could significantly increase energy costs for ordinary Americans.

What Happens Next in Congress?

The White House framework is nonbinding but highly influential, and key elements are expected to inform bills already circulating among Republicans. Blackburn's bill is strategically broad, designed to preserve her status as a key negotiator for whatever legislative product emerges from congressional negotiations.

The timing matters. This push follows the GOP-led House Energy and Commerce Committee's recent advancement of a package of kids' online safety bills along largely partisan lines, building momentum for child-focused legislation. By bundling AI regulation with child safety, intellectual property protections, and workforce transparency, lawmakers hope to overcome the bipartisan resistance that killed previous preemption efforts.

Significant questions remain unresolved, however: whether preemption should cover areas of recent state legislative focus, such as automated decision-making technology and AI chatbots, and whether federal standards can truly replace the patchwork of state laws without leaving gaps in consumer protection.
The outcome will determine whether Americans are governed by a single national AI standard or continue to benefit from state-level experimentation and competition in AI regulation.