Sam Altman's relationship with the Trump White House reveals how closely the AI industry is shaping the federal rules meant to govern it. Just days after Trump returned to office, Altman stood behind the presidential seal praising the president for the $500 billion "Stargate" AI infrastructure initiative, telling him, "For AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn't be able to do this without you, Mr. President." Altman had donated $1 million to Trump's inaugural committee weeks earlier and attended the swearing-in ceremony at the Capitol, cementing OpenAI's position as a favored player in the administration's AI strategy.

Why Is the White House Siding With OpenAI Over State Regulators?

The White House is expected to send Congress a legislative framework for federal AI regulation, but its pattern of policy decisions suggests it will prioritize industry interests over state protections. The administration has already attempted multiple strategies to block states from enforcing their own AI laws. In May 2025, the House passed the One Big Beautiful Bill Act, which would have placed a 10-year moratorium on any state enforcing laws affecting "artificial intelligence models," "artificial intelligence systems," or "automated decision systems." Senators, however, voted 99-1 in an overnight session to remove the provision, with Senator Marsha Blackburn of Tennessee leading the charge. Blackburn warned that the moratorium "could allow Big Tech to continue to exploit kids, creators, and conservatives." When that failed, the White House tried again in November, pushing to attach AI preemption language to the National Defense Authorization Act. That effort drew bipartisan opposition from Florida Governor Ron DeSantis, Arkansas Governor Sarah Huckabee Sanders, and Senator Josh Hawley, and was excluded from the final bill unveiled December 7.
Four days later, Trump signed an executive order directing the Attorney General to establish an AI Litigation Task Force to sue states over their AI laws and threatening to withhold Broadband Equity Access and Deployment funding from states with certain regulations. BEAD is a massive federal grant program established by the 2021 Bipartisan Infrastructure Law to expand high-speed internet access in underserved communities.

How Is the White House Handling Companies That Resist Compliance?

The relationship between the White House and the AI industry it is writing rules for reveals a clear pattern: reward compliance and punish independence. When Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline to lift Claude's restrictions against use in mass domestic surveillance and autonomous weapons, Amodei refused. Anthropic said the Pentagon's proposed compromise language was "paired with legalese that would allow those safeguards to be disregarded at will." Trump responded by ordering every federal agency to stop using Anthropic's technology, posting on Truth Social that "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." After Anthropic was designated a supply chain risk, OpenAI stepped in and reached a deal, and Google is deepening its ties as well.

The framework coming to Congress is expected to be shaped primarily by David Sacks, Trump's AI czar, a venture capitalist whose financial network is deeply embedded in the AI industry he is now tasked with regulating. According to a New York Times investigation, Sacks remains invested in 449 companies with AI products and secured not one but two ethics waivers allowing him to shape federal AI policy while holding those stakes. A government ethics expert at Washington University described these as "sham ethics waivers" that were "aimed at enabling Sacks to profit from his government position."
Steps to Understanding the Stakes in Federal AI Regulation

- Track Executive Orders: Monitor how the Trump administration uses executive power to override state AI laws and condition federal funding on compliance with federal standards, bypassing congressional debate.
- Follow Congressional Pushback: Watch for bipartisan opposition in Congress, such as Senator Marsha Blackburn's efforts to preserve state authority and the recent "TRUMP AMERICA AI Act" draft, which would impose a duty of care on AI developers.
- Examine Conflict of Interest Disclosures: Review ethics waivers and financial disclosures for federal AI officials to understand whose interests proposed regulations would serve.

The framework is expected to touch on what Sacks calls "the four C's": child safety, communities, creators, and censorship. That language says almost nothing about liability, labor, or accountability, the core concerns state regulators have been trying to address.

A significant development emerged as this policy battle was unfolding. Senator Marsha Blackburn released a discussion draft of the "TRUMP AMERICA AI Act," a nearly 300-page bill that would place a duty of care on AI developers, sunset Section 230, and incorporate bipartisan legislation on child safety, creator copyright protections, and AI-related job reporting. It is, by a wide margin, the most substantive federal AI proposal to date, and its existence already reshapes the landscape and complicates the White House's plans.

The stakes are high. If the White House succeeds in preempting state AI laws, Americans in most states will face weaker protections than those living in states with existing regulations. If Congress goes its own way, as Blackburn's proposal suggests it might, the outcome could be very different.
The question now is whether Altman's influence and OpenAI's close ties to the administration will be enough to shape federal AI policy in the company's favor, or whether congressional resistance will prevail.