Senator Marsha Blackburn has introduced a new federal AI framework designed to establish uniform national rules for artificial intelligence, focusing primarily on protecting children online and safeguarding copyright while potentially overriding state-level AI laws. The discussion draft, unveiled on March 18, 2026, represents Congress's latest attempt to create a cohesive federal approach to AI governance, combining elements from previously introduced legislation to address what lawmakers describe as a fragmented patchwork of state regulations that hinders innovation.

What Does Blackburn's AI Framework Actually Require?

The framework establishes specific obligations for AI developers and companies deploying artificial intelligence systems. For children under age 17, the proposal creates a duty of care for developers while mandating AI chatbot safeguards, data protection standards, and mechanisms for consumers to report AI-related harms. The framework also includes a private right of action, meaning families could pursue litigation against AI companies for defective design, failure to warn, express warranty violations, and unreasonably dangerous or defective product claims.

On the copyright side, the framework introduces new federal transparency guidelines requiring companies to mark, authenticate, and detect AI-generated content. The proposal tasks the U.S. National Institute of Standards and Technology with creating cybersecurity standards to prevent tampering with provenance and watermarking on AI content. Additionally, the framework requires third-party audits for bias and discrimination based on political affiliation.

How Would Age Verification and Chatbot Safeguards Work?

- Age Verification Requirements: Covered entities must collect age-related data from government-issued identification or other reasonable verification methods for all chatbot users under 18, with rolling reviews of previously verified accounts to ensure ongoing compliance.
- Data Security Standards: The framework specifies retention periods and applies necessity and proportionality standards to verification data, addressing concerns about how companies handle sensitive information from minors.
- Required Disclosures: AI companies must provide clear reminders that users are conversing with non-humans and non-professionals, ensuring users understand they are interacting with artificial intelligence rather than human experts.
- Safety Guardrails: The framework mandates specific safeguards for AI chatbots to prevent harmful interactions with minors, building on concepts from the proposed Kids Online Safety Act (KOSA).

The age verification approach gained particular relevance after the Federal Trade Commission recently issued a policy statement encouraging the use of age verification technologies while forgoing enforcement over verification data practices. When the statement was released, FTC Bureau of Consumer Protection Director Christopher Mufarridge said the agency's new stance "incentivizes operators to use these innovative tools, empowering parents to protect their children online."

Why Is This Creating Tension Between Federal and State Authority?

Blackburn's proposal directly addresses the Trump administration's goal of preempting state-level AI legislation, as outlined in its December 2025 executive order. However, the framework's approach to preemption has created significant debate among policy experts.

Brenda Leong, Director of the AI Division at ZwillGen, explained that the bill's general preemption provision in Section 1701 broadly preserves all "generally applicable" state and local AI laws. This means state or local bias audit requirements, automated decision-making obligations, transparency requirements, and algorithmic accountability frameworks would likely survive even if the legislation passes, allowing companies operating in states like Colorado, Illinois, and New York to expect those regimes to remain in force.
The framework's preemption strategy differs from what some observers expected. Rather than creating a complete federal ceiling that eliminates all state action, the proposal allows states to go beyond federal protections where they see fit, particularly regarding children's safety. Digital Smarts Law and Policy Principal Ariel Fox Johnson noted that "lawmakers understand that with respect to kids, it may be very difficult to have a federal ceiling, especially when the states have been so active in passing a variety of kids privacy and safety laws, whereas Congress has been less so."

What Are Experts Concerned About?

While the framework has generated bipartisan interest, particularly around child safety provisions that passed the Senate on a 91-3 vote in July 2024, experts have raised concerns about specific elements. Calli Schroder, Senior Counsel at the Electronic Privacy Information Center, told the IAPP that the framework "suffers from trying to appeal to both the president and those concerned with AI's demonstrable harms." By attempting to cover so many aspects of a broad-reaching technology simultaneously, the proposal may struggle to satisfy all stakeholders.

One particularly contentious provision involves potential government requests for intellectual property. Brenda Leong highlighted that covered entities deploying "advanced artificial intelligence systems" could face enforcer requests for code, training data, model weights, and other proprietary information. She emphasized that "no U.S. regulatory regime has ever conditioned the right to operate on surrendering your entire intellectual property to a government agency on demand, not in pharmaceuticals, not in defense, not in finance," raising "profound constitutional questions about regulatory takings, due process and controls on government use and profit from this information."

How Does This Fit Into Broader AI Governance Efforts?
Blackburn's proposal emerges as policymakers and researchers continue examining how to balance innovation with safety and privacy protections. The Future of Privacy Forum recently concluded its 16th Annual Privacy Papers for Policymakers event in March 2026, bringing together global researchers to discuss pressing areas in privacy and AI governance. The winning papers and honorable mentions highlighted work analyzing current and emerging privacy and AI issues while proposing achievable short-term solutions that could lead to real-world policy outcomes.

The timing of Blackburn's framework is significant because it attempts to jumpstart congressional dialogue toward delivering on the White House's goal of uniform federal standards. According to reports, Blackburn has been in close contact with the White House, which is expected to introduce a separate legislative recommendation, creating a fluid policy discussion alongside Blackburn's draft. The goal is to blend the proposals as deemed appropriate, arriving at the "uniform" policy mandated under the executive order.

White House Special Advisor for AI and Crypto David Sacks explained that "it basically states the policy of the administration is to create that federal framework. We're going to work with Congress to define that framework, but in the meantime, this gives Trump tools to push back on the most onerous and excessive state regulations." This approach suggests the administration views federal preemption as a tool to prevent what it considers overly restrictive state-level rules while still establishing baseline protections.

The framework's success will likely depend on whether lawmakers can navigate competing priorities around child protection, copyright enforcement, innovation incentives, and constitutional concerns about government authority.
With wide bipartisan support for child safety provisions but expected opposition to the broader Republican approach to AI legislation, Democrats face a strategic choice: support the framework to advance long-sought child protection goals, or oppose it on broader ideological grounds.