Virginia Pioneers a New Model for AI Oversight: Independent Verification Instead of Government Regulation
Virginia has enacted legislation creating a groundbreaking framework for AI oversight that relies on independent expert organizations rather than government agencies to verify AI safety. Governor Abigail Spanberger signed SB 384 and HB 797 in April 2026, directing the state's Joint Commission on Technology and Science (JCOTS) to evaluate the development of Independent Verification Organizations (IVOs): independent bodies that would verify whether AI systems meet safety standards. The legislation passed with overwhelming bipartisan support, 84-14 in the House and 40-0 in the Senate, signaling rare consensus on how to govern a rapidly advancing technology.
What Problem Does the IVO Framework Actually Solve?
The core challenge driving Virginia's approach is straightforward: government cannot keep pace with AI innovation, and industry self-regulation has proven insufficient. As AI systems are deployed across healthcare, education, criminal justice, and critical infrastructure, the gap between technological capability and society's ability to oversee it continues to widen. Traditional regulatory approaches, which rely on government agencies to understand and approve new technologies, move too slowly for an industry that releases new models and capabilities every few months.
The IVO framework addresses this by creating a hybrid model. Rather than government agencies directly regulating AI, Virginia would establish outcome-based safety goals, then authorize a marketplace of independent verification organizations to develop technical criteria and verify whether AI products meet those goals. Companies would voluntarily submit their systems for verification, and those that pass earn a trusted seal of approval that provides legal protection in litigation.
"This legislation reflects a practical reality: government alone cannot keep up with the pace of AI development, and industry cannot be expected to police itself," said Andrew Freedman, Co-Founder and CEO of Fathom, the nonprofit organization that championed the framework.
How Does the IVO Model Compare to Existing Governance Approaches?
The IVO framework is not entirely new; it borrows from established governance models in other industries. Financial auditing, product safety testing, and clinical trials all rely on independent, expert-led organizations to verify compliance with standards set by government. The difference is that these verification bodies operate within a structured marketplace, competing to develop credible standards while maintaining independence from the companies they evaluate.
This approach sits between two extremes. On one end, traditional "command and control" regulation gives government agencies direct authority to approve or deny technologies. On the other end, industry self-governance relies on companies to police themselves, which has historically led to conflicts of interest. The IVO model attempts to preserve innovation speed while introducing meaningful accountability through independent technical experts who answer to the public, not to the companies building the technology.
How Would IVOs Function in Practice?
- Government Sets Goals: Virginia would define outcome-based safety objectives for AI systems, such as requirements for accuracy, fairness, or transparency, without prescribing how companies must achieve them.
- Independent Organizations Develop Standards: Multiple IVOs would compete to develop technical criteria and testing methodologies that align with the government's safety goals, creating a marketplace of expertise.
- Voluntary Verification: AI companies would choose whether to submit their systems for verification by one or more IVOs, knowing that certification provides legal defensibility and market trust.
- Public Oversight: The verification organizations themselves would be subject to government oversight, ensuring they maintain independence and credibility rather than becoming captured by industry interests.
Delegate Cliff Hayes Jr., who introduced the legislation, emphasized that this approach recognizes a fundamental truth about modern technology governance. "Our legislation recognizes that a new technology requires a new approach to governance," Hayes stated. "The IVO framework offers exactly that: a way to put independent technical experts at the center of AI oversight, working within a voluntary structure that our government can oversee and the public can trust."
Why Is Virginia Positioned to Lead on This Issue?
Virginia's leadership on AI governance is not accidental. The state hosts the world's largest concentration of data centers and has a technology economy that is central to its future economic competitiveness. This combination of technological infrastructure and economic stakes gives Virginia both the expertise and the motivation to develop governance models that protect the public while maintaining a competitive business environment.
The timing is also critical. As increasingly autonomous AI systems enter healthcare, education, criminal justice, and critical infrastructure, that oversight gap only widens. Senator Angelia Williams Graves, who co-introduced the legislation, framed the issue in terms of constituent concerns: "The families and communities I represent aren't asking for government to slow down innovation; they're asking for someone to be looking out for them as powerful new technologies enter their daily lives."
The JCOTS study authorized by this legislation will lay the groundwork for what could become the nation's first operational IVO framework. If successful, Virginia's model could influence how other states and potentially the federal government approach AI governance, offering a template for balancing innovation with accountability in an era of rapid technological change.