Why Congress Is Pumping the Brakes on AI Regulation Before the Rules Get Too Messy

Congress is deliberately slowing down AI regulation, arguing that lawmakers need better evidence about how artificial intelligence actually affects workers and businesses before crafting federal rules. Rather than rushing to legislate, the House Workforce Protections Subcommittee is holding a series of hearings to understand AI's real-world economic impact, signaling a fundamentally different approach from Europe's strict regulatory model.

Why Is Congress Hesitant to Regulate AI Right Now?

Workforce Protections Subcommittee Chairman Ryan Mackenzie (R-PA) laid out the case for caution in a hearing titled "Building an AI-Ready America: Understanding AI's Economic Impact on Workers and Employers." The core argument is straightforward: better data leads to better policy. Without solid evidence about how AI affects employment, productivity, and competition, Congress risks creating rules that either miss the mark or stifle innovation.

The concern isn't that AI poses no risks. Rather, lawmakers worry that premature regulation could backfire. Small businesses, which are the fastest adopters of AI technology, actually expect to increase hiring rather than reduce it, according to the Small Business Administration. This counterintuitive finding suggests that the conventional narrative about AI destroying jobs may be incomplete, and Congress wants to understand the full picture before legislating.

What's the Problem With State-by-State AI Rules?

While Congress deliberates, states are moving ahead independently. New York, California, Colorado, and others have already implemented their own AI regulations targeting privacy, discrimination, and workforce concerns. This creates a growing headache for businesses that operate across state lines, particularly small companies that lack the compliance infrastructure of larger corporations.

The fragmented regulatory landscape poses several challenges for American competitiveness:

  • Compliance Costs: Businesses must navigate conflicting rules across multiple jurisdictions, driving up operational expenses and diverting resources from innovation.
  • Competitive Disadvantage: Foreign countries are investing heavily in AI research and workforce development, and patchwork U.S. regulations could slow domestic AI capabilities relative to global competitors.
  • Small Business Burden: Smaller companies lack the legal and compliance teams that larger enterprises use to manage regulatory complexity across states.

Mackenzie emphasized that American leadership in AI will be essential in the coming years, and that Congress should consider ways to streamline the regulatory framework while still protecting the public interest.

How Can Congress Balance Innovation With Safety?

The challenge lawmakers face is real: AI is advancing faster than government can typically respond. Traditional "command and control" regulation, where agencies write detailed rules and enforce them through penalties, struggles to keep pace with rapidly evolving technology. Yet leaving AI entirely to industry self-governance raises legitimate safety and fairness concerns.

One emerging approach gaining traction is the Independent Verification Organization (IVO) framework; in April 2026, Virginia became the first state to advance it through legislation. Under this model, government sets outcome-based safety goals, but independent third-party organizations verify whether AI systems meet those standards. Companies voluntarily seek verification, and those that pass earn a trusted seal of approval.

"This legislation reflects a practical reality: government alone cannot keep up with the pace of AI development, and industry cannot be expected to police itself," said Andrew Freedman, Co-Founder and CEO of Fathom.

The IVO framework is modeled on approaches used in other industries like financial auditing, product safety, and clinical trials. It sidesteps the false choice between heavy-handed regulation and no oversight at all. Instead, it creates a marketplace of independent experts who answer to the public, not to AI companies.

What's Japan Doing Differently?

While Congress gathers data and states experiment with different approaches, Japan has taken a distinctly lighter touch. In May 2025, Japan's Parliament passed the "Act on Promotion of Research and Development and Utilization of Artificial Intelligence-Related Technologies," the country's first comprehensive AI law. Unlike the European Union's detailed AI Act, Japan's legislation focuses on establishing basic policies and principles rather than prescriptive rules and penalties.

Japan's approach reflects a deliberate strategy to position itself as "the world's most friendly country for developing and utilizing AI." The law creates a framework for future regulations but imposes no monetary penalties on businesses. Instead, it establishes an AI Strategic Headquarters headed by the Prime Minister and composed of all Cabinet members, signaling that AI governance is a whole-of-government priority.

The Japanese government has also released guidelines recommending a risk-based approach to AI governance, active stakeholder involvement, and agile responses to emerging risks. This contrasts sharply with more prescriptive regulatory models and reflects Japan's belief that innovation and safety can coexist through flexible governance.

Why Independent Auditing Might Be the Missing Piece

Beyond the IVO framework, experts are increasingly pointing to independent third-party auditing as a critical gap in current AI governance. Miles Brundage, founding executive director of the AI Verification and Evaluation Research Institute (AVERI) and former senior advisor for AGI readiness at OpenAI, argues that existing regulations like California's SB 53 and New York's RAISE Act have significant weaknesses.

"The unit of analysis should shift from individual AI models to the organizations that build them," explained Miles Brundage.

Brundage left OpenAI to build AVERI as an independent nonprofit focused on rigorous third-party assessment of safety and security practices at leading AI companies. The organization proposes a framework of AI Assurance Levels ranging from baseline transparency to treaty-grade verification, allowing different levels of oversight depending on the risks posed by a particular AI system.

The challenge with current safety benchmarks is that they can be gamed or manipulated, a problem Brundage calls the "Volkswagen problem," referring to the automaker's emissions testing scandal. Independent auditors help make safety evaluations deception-proof by verifying not just what AI systems claim to do, but how organizations actually develop and deploy them.

What Comes Next for U.S. AI Regulation?

Congress's deliberate pace reflects a genuine policy dilemma. Rushing to regulate could entrench rules that become obsolete or harmful. Moving too slowly could allow real harms to accumulate. The emerging consensus among policymakers seems to be that the answer lies somewhere between Europe's prescriptive approach and complete deregulation.

Virginia's IVO legislation and AVERI's auditing framework suggest that the future of AI governance may involve independent experts, market-based incentives, and outcome-focused rules rather than detailed prescriptions. This approach could allow Congress to establish clear safety goals while letting industry and independent organizations figure out how to meet them.

For now, Congress is taking the time to gather evidence, states are experimenting with different models, and international competitors like Japan are positioning themselves as innovation-friendly alternatives to stricter Western regulation. The outcome of this three-way competition will likely shape AI governance globally for years to come.