America's failure to regulate artificial intelligence isn't primarily a technology problem; it's a democracy problem. The White House's recent national framework for AI, released in March 2026, reveals how concentrated wealth and the erosion of democratic institutions have made meaningful AI governance nearly impossible.

## Why Is America Struggling to Regulate AI When Europe Isn't?

The Trump administration's AI framework asks Congress to preempt all state AI laws, avoid creating any new regulatory body, and shield AI developers from liability. On the surface, this looks like a straightforward deregulation play. But the real story runs much deeper. The framework represents the culmination of a decades-long effort by the tech industry to dismantle the democratic infrastructure that would allow government to govern effectively.

Consider what happened in Colorado, which passed the first comprehensive state AI law in the nation. Within months, more than 150 industry lobbyists worked to strip the law down to almost nothing. Gone were the duty of care standard, bans on algorithmic discrimination, and impact assessments. State Senator Julie Gonzales observed that "all 35 of us in this building know that we too have witnessed the stunning brunt of AI leverage."

This wasn't a failure of lawmakers to understand the technology. It was the predictable outcome of an imbalance of power. The White House framework follows a familiar playbook: argue that a patchwork of state laws creates uncertainty, then ensure the federal substitute does little or nothing. The result is preemption as a weapon, not a solution.

## How Did Money in Politics Break AI Governance?

The roots of this crisis trace back to the 2010 Citizens United Supreme Court decision, which removed constraints on political spending. The consequences have been staggering.
In the 2024 federal elections, just 300 billionaires and their immediate family members gave 19 percent of all contributions (more than $3 billion), either directly or through political action committees. This concentration of wealth has fundamentally reshaped what policies are even possible to pursue. Once spending constraints evaporated, the entire political landscape shifted. Public financing became nonviable. Both major parties became structurally dependent on the same donor class. The range of policies either party could pursue narrowed to what that class would tolerate. AI rules fell out of favor before the conversation even started.

The tech industry has poured millions into positioning itself with the current administration. Yet consumers of AI technology have no comparable seat at the table. Information about how people interact with these systems could transform public understanding, but companies have never been required to divulge it. Social media and AI companies know exactly how algorithms extract attention, exploit emotional needs, and affect decision-making, yet this knowledge remains hidden from the public.

## What Specific Gaps Does Trump's AI Plan Leave Unfilled?

The White House framework does call for federal action in five areas: child safety, AI-enabled fraud, intellectual property licensing, workforce training, and energy infrastructure. But the framework's deregulatory core concerns frontier AI development (the training of cutting-edge AI systems), where it simultaneously preempts states from regulating while ensuring the federal government doesn't step in.

The administration argues that frontier AI development is "an inherently interstate phenomenon with key foreign policy and national security implications," which is precisely why the federal government should oversee it. Yet the plan explicitly rejects creating any new federal rulemaking body. Instead, it relies on existing sector-specific regulators and industry-led standards.
This approach leaves a critical governance vacuum for some of the most consequential AI risks:

- Bioweapons assistance: AI systems could help bad actors develop biological weapons, yet no sector-specific regulator oversees general-purpose AI models
- Autonomous cyber offense: AI could enable large-scale cyberattacks with minimal human intervention, but there is no mandatory guardrail framework
- Unintended or uncontrollable model behavior: As AI systems become more powerful, the risk of losing control over their actions increases, yet industry-led standards offer no binding commitments

The problem is that industry-led standards can disappear at the whim of a handful of increasingly powerful CEOs. Anthropic recently overhauled its Responsible Scaling Policy, and OpenAI dissolved its alignment team entirely. The framework offers no plan for when the next safety commitment quietly vanishes.

## Steps for Policymakers to Address the Governance Vacuum

While the Trump administration forecloses debate on frontier AI governance, experts have proposed alternative approaches worth considering:

- Create a coordination body: Establish an entity modeled on the National Institute of Standards and Technology to oversee general-purpose AI models without creating a new regulatory agency
- Empower an existing agency: Assign oversight of frontier AI development to an established federal agency with the authority to set mandatory guardrails and conduct regular audits
- Stress-test competing proposals: Evaluate different governance models by examining their underlying assumptions about how AI development works and which risks matter most

A recent report from Georgetown's Center for Security and Emerging Technology offers a framework for policymakers to evaluate these competing proposals. But the Trump administration's silence on frontier AI governance is itself a policy choice, and a consequential one.
The deeper issue is that America's AI governance crisis reflects a broader erosion of democratic capacity. For 25 years, regulatory safeguards in areas like auto safety, food safety, campaign finance, and data privacy have been weakened by concentrated industry power. The AI industry's current dominance is not an anomaly. It is the latest, most brazen expression of a decades-long project to replace democratic accountability with the exercise of raw power.

Until America addresses the underlying condition of captured politics and billionaire influence, meaningful AI governance will remain out of reach. The question is not whether technology moves too fast for regulation. The question is whether democracy can survive when the wealthiest industries have more power than the people's elected representatives.