Why States, Not Washington, May Hold the Key to AI Trust

State governments are emerging as the primary stewards of AI governance in America, using their existing regulatory authority over health care, education, and workplace safety to shape how the technology is developed and deployed. Rather than waiting for federal rules to catch up with rapid AI advancement, states are acting through new laws, executive orders, and enforcement mechanisms that directly influence how companies build and use AI systems.

The urgency is real. A Fox News poll found that 66 percent of registered voters expressed concern about AI, while Pew Research Center data revealed that only one in ten Americans feel they have significant control over whether AI is used in their lives, despite more than six in ten wanting greater control. This trust gap represents a fundamental challenge to AI's commercial success and social acceptance.

Why Are States Better Positioned Than the Federal Government?

States have constitutional authority and practical experience regulating industries that AI is now transforming. From food safety and building codes to electrical grids and professional licensing, states have historically been the first to establish safety standards for new technologies. This pattern is repeating with AI.

The regulatory landscape gives states significant leverage. Twenty-two states are primary regulators of occupational health and safety, making them gatekeepers for AI deployment in industrial automation. States administer Medicaid, which finances approximately one in five dollars of national health spending, giving them influence over AI adoption in health care. State departments of education and local school boards control access to K-12 and postsecondary education markets, another critical frontier for AI deployment.

"From AI development and deployment to data center infrastructure and energy generation and distribution, US states will always be necessary partners in AI governance," stated Trooper Sanders in a perspective piece on state-led AI stewardship.

Trooper Sanders, TechPolicy.Press

Beyond formal regulatory authority, states wield soft power through procurement decisions, market-based incentives, and public participation mechanisms. When California Governor Gavin Newsom issued an executive order requiring the state to assess AI companies seeking to do business with California based on their efforts to address exploitation, illegal content distribution, bias, and civil rights violations, it sent a signal that rippled across the industry.

How Can States Build Effective AI Governance?

State governments can implement a comprehensive approach to AI governance that combines multiple tools and strategies. This integrated strategy addresses the reality that AI technology evolves faster than laws can be written and implemented, requiring states to use both traditional regulatory tools and newer mechanisms to stay ahead of risks.

  • Legislative Action: New AI model transparency laws enacted in 2025 are already concentrating the minds of AI labs and giving states a way to act in the window between troubling disclosures and real-world harms.
  • Executive Leadership: Governors can use executive orders, state of the state addresses, and public recognition of good practices to elevate the political significance of AI safety and signal expectations to industry.
  • Agency Enforcement: State attorneys general, insurance commissioners, and other agency leaders can tap their oversight, enforcement, and convening powers to focus industry attention on public concerns about AI safety and fairness.
  • Regulatory Leverage: Existing regulatory regimes covering industries deploying AI can be modified to influence AI product safety and model governance without requiring entirely new frameworks.
  • Procurement Power: State spending and investments serve as powerful levers for influencing commercial practice, as companies seek contracts with government agencies.
  • Interstate Coordination: Compacts and coordinated efforts between governors can aggregate the influence of multiple states to create streamlined, de facto national policy.

The challenge states face is timing. AI capabilities advance rapidly while laws evolve slowly, creating a pacing problem where new regulations can quickly become irrelevant. Additionally, conclusive evidence that AI is causing harm may emerge too slowly for regulation to limit near-term dangers. This is why non-legislative tools matter as much as formal lawmaking.

What Is the Real Problem Behind AI Governance Failures?

While public attention often focuses on AI model safety and alignment, enterprise AI governance experts point to a different culprit: data access and visibility. Organizations deploying AI systems are struggling to govern how AI interacts with sensitive enterprise data in real time, and most governance failures stem from uncontrolled data access rather than model flaws.

The rise of agentic AI, which refers to autonomous AI systems that can reason and complete tasks with minimal human oversight, has intensified this problem. These agents interact with enterprise systems the same way employees do, connecting to files, APIs, customer relationship management systems, internal documentation, and customer records. This dramatically expands the attack surface and makes data visibility the foundation of any AI security strategy.
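
To make the data-visibility point concrete, here is a minimal Python sketch of one way to route every agent data request through a single audited gateway. The DataGateway class, the AGENT_POLICY mapping, and the agent and resource names are illustrative assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical access policy: which resources each AI agent may read.
# Agent IDs and resource names are illustrative, not a real schema.
AGENT_POLICY = {
    "support-agent": {"crm.tickets", "kb.articles"},
    "finance-agent": {"erp.invoices"},
}

@dataclass
class DataGateway:
    """Single chokepoint between AI agents and enterprise data sources.

    Every request is checked against policy and logged, so security teams
    keep visibility into exactly what data each agent touches.
    """
    audit_log: list = field(default_factory=list)

    def fetch(self, agent_id: str, resource: str) -> str:
        allowed = resource in AGENT_POLICY.get(agent_id, set())
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "resource": resource,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{agent_id} may not read {resource}")
        return f"<contents of {resource}>"  # stand-in for a real connector

gateway = DataGateway()
print(gateway.fetch("support-agent", "kb.articles"))   # permitted and logged
try:
    gateway.fetch("support-agent", "erp.invoices")     # denied, still logged
except PermissionError as err:
    print(err)
```

The design choice worth noting is the single chokepoint: denied requests are logged as faithfully as granted ones, which is what turns access control into the visibility the governance experts describe.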

"The AI risk problem is a data problem. Most AI governance failures stem from uncontrolled data access, not model flaws. Data visibility is the foundation of any AI security strategy," explained Francis Odum, Founder and CEO of Software Analyst Cyber Research, at the AI Governance Leadership Forum.

Many organizations believe their existing privacy and security programs already cover AI, but this assumption itself represents a significant governance risk. Traditional privacy-by-design frameworks assumed systems would behave in deterministic, predictable ways. Generative AI breaks that model, requiring continuous oversight rather than static reviews.

Organizations now need ongoing behavioral monitoring, drift detection and anomaly alerts, model behavior testing after deployment, and post-deployment AI governance reviews. AI security must span the entire machine learning lifecycle, from training data integrity through model deployment and inference, not just the production API layer.
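
As a toy illustration of what continuous behavioral monitoring and drift detection can look like, the sketch below compares a short window of recent model behavior against a frozen baseline and raises an alert when the two diverge. The metric (response length), window sizes, and threshold are all assumptions chosen for readability; production systems would track richer signals such as refusal rates, tool-call frequency, or embedding distances.

```python
from collections import deque
from statistics import mean, stdev
import random

class DriftMonitor:
    """Flag when recent model behavior drifts from a frozen baseline.

    Tracks one scalar per response (here, response length for simplicity)
    and compares the mean of a short recent window against the baseline.
    """
    def __init__(self, baseline_size=100, recent_size=10, threshold=3.0):
        self.baseline = deque(maxlen=baseline_size)
        self.recent = deque(maxlen=recent_size)
        self.threshold = threshold  # alert when the recent mean sits this
                                    # many baseline std deviations away

    def observe(self, response: str) -> bool:
        score = float(len(response))  # stand-in for a richer behavior metric
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(score)  # still building the baseline
            return False
        self.recent.append(score)
        if len(self.recent) < self.recent.maxlen:
            return False
        spread = stdev(self.baseline) or 1.0  # guard against zero spread
        drift = abs(mean(self.recent) - mean(self.baseline)) / spread
        return drift > self.threshold  # True means "raise an alert"

random.seed(0)
monitor = DriftMonitor()
stream = ["x" * random.randint(40, 60) for _ in range(120)]    # typical outputs
stream += ["x" * random.randint(400, 600) for _ in range(10)]  # behavior shift
for i, reply in enumerate(stream):
    if monitor.observe(reply):
        print(f"drift alert at response {i}; route to governance review")
        break
```

An alert here would feed an incident or review workflow rather than block traffic outright, which keeps the oversight continuous instead of turning it back into a static gate.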

How Should Organizations Prepare for AI Governance?

For enterprises deploying AI systems, governance cannot remain confined to policy documents. It must become operational and embedded directly into product development, working across engineering, product, and data governance teams.

  • Data Visibility First: Before deploying any AI system, organizations must understand what data exists, where it lives, and how AI systems will access it.
  • Identity and Access Control: Manage AI agents like digital employees with centralized identity and access control, applying the same least-privilege principles used for human employees; a sketch of this idea follows the list.
  • Training Data Integrity: Secure training datasets and data pipelines, as compromised training data can cause model behavior to become unreliable, with problems only surfacing much later in deployment.
  • Behavioral Monitoring: Implement continuous monitoring for model behavior and anomalies, since AI systems do not always fail in obvious ways.
  • Cross-Functional Collaboration: AI governance must be infrastructure, not a gate process, requiring collaboration between security, privacy, engineering, and product teams.
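
As a sketch of the digital-employee idea from the identity item above, the snippet below mints an AI agent a short-lived credential scoped to the minimum permissions its task needs. The TASK_SCOPES mapping, scope strings, and token lifetime are hypothetical; real deployments would delegate this to an identity provider.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical task-to-scope mapping: each agent task receives only the
# permissions it needs, mirroring least-privilege for human employees.
TASK_SCOPES = {
    "summarize-tickets": {"crm.tickets:read"},
    "draft-invoice": {"erp.invoices:read", "erp.invoices:write"},
}

def issue_agent_credential(agent_id: str, task: str, ttl_minutes: int = 15) -> dict:
    """Mint a short-lived, narrowly scoped credential for one agent task."""
    scopes = TASK_SCOPES.get(task)
    if scopes is None:
        raise ValueError(f"no scope policy defined for task {task!r}")
    return {
        "agent": agent_id,
        "token": secrets.token_urlsafe(32),  # opaque bearer secret
        "scopes": sorted(scopes),            # least-privilege permission set
        "expires": (datetime.now(timezone.utc)
                    + timedelta(minutes=ttl_minutes)).isoformat(),
    }

cred = issue_agent_credential("support-agent", "summarize-tickets")
print(cred["scopes"], "valid until", cred["expires"])
```

Short lifetimes and per-task scopes limit the blast radius if an agent is prompt-injected or otherwise compromised, the same reasoning that motivates least privilege for human accounts.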

The convergence of state-level AI governance and enterprise-level data governance creates a new landscape for AI development. States are building the policy frameworks and enforcement mechanisms that will shape what companies can do with AI, while organizations must simultaneously implement the operational controls to govern how AI systems actually behave in production environments.

This dual approach, combining state regulatory authority with enterprise operational governance, represents the emerging consensus on how to balance AI's promise with public trust. Neither federal rules alone nor company self-regulation will suffice. Instead, a partnership between state governments, federal oversight, and responsible enterprise practices offers the most realistic path forward for AI that serves both innovation and public safety.