When AI Meets Surveillance: Why States Are Borrowing Aviation's Safety Playbook
States are beginning to treat artificial intelligence failures like aviation accidents, establishing formal investigation systems to learn from AI incidents before they spiral into larger problems. A new policy framework from the Aspen Policy Academy urges state officials to build these structured investigative processes, starting with Utah's Office of Artificial Intelligence Policy. The approach marks a significant shift from reactive regulation to continuous learning and could offer a blueprint for how governments nationwide manage AI risks.
Why Are States Looking to Aviation for AI Governance?
The aviation industry has spent decades perfecting the art of learning from failure. When a plane crashes or experiences a serious incident, the National Transportation Safety Board (NTSB) launches a comprehensive investigation that examines root causes, identifies systemic vulnerabilities, and feeds findings back into pilot training, aircraft maintenance, and air traffic control procedures. This culture of transparency and continuous improvement has made commercial aviation remarkably safe.
Michelle Sipics, the Aspen Policy Academy fellow who authored the framework, explained the reasoning behind this approach.
Sipics argues that generative AI needs the same disciplined approach to safety that aviation has developed. "Safety has continued to improve over the decades, and one of the reasons for that is the dedication to investigating incidents. From those investigations, the industry feeds what they learn back into everything, from how they train pilots, how they train air traffic control, designing aircraft maintenance operations, everything," she stated.
The framework specifically addresses a critical gap in current AI governance. As more state governments deploy generative AI tools in hiring, housing decisions, and government services, officials are increasingly grappling with how to manage real-world risks such as algorithmic discrimination. Colorado lawmakers, for example, are still debating legislative changes to the state's landmark 2024 AI law, including how responsibility should be assigned to developers and deployers when something goes wrong.
What Would a GenAI Incident Investigation Actually Look Like?
The Aspen framework proposes a structured investigative process that brings together government officials, developers, and industry experts to examine what the framework calls "GenAI incidents": cases in which AI systems cause direct harm through their development, deployment, or outputs. Rather than focusing on enforcement and punishment, the model emphasizes root-cause analysis and prevention, mirroring the approach that has made aviation safety so effective.
The framework also calls for companies participating in Utah's regulatory sandbox to sign a pledge committing to publicly share investigation findings, similar to incident reports published by the NTSB. This transparency requirement is central to the proposal's logic.
"Trust is not a milestone that you hit, it's something that you earn and you maintain. Both regulators and members of the public watch what you do when something goes wrong," Sipics explained.
Utah's Office of Artificial Intelligence Policy operates one of the nation's few AI regulatory sandboxes, a controlled environment where the state can test technologies under close regulatory watch for legal and policy compliance. The office's Regulatory Relief program is designed to provide compliance exemptions for AI companies whose tools may benefit the state in the future. However, the Aspen framework identifies a significant weakness: the agency currently lacks clear processes for responding when those tools produce biased decision-making, unsafe recommendations, or other failures with financial, physical, or societal consequences.
How to Build an Effective AI Incident Response System
- Establish Clear Investigation Protocols: Create standardized procedures for documenting, analyzing, and investigating AI incidents that mirror aviation's approach to accident investigation, focusing on root-cause analysis rather than blame assignment (a minimal documentation sketch follows this list).
- Require Public Transparency: Mandate that companies and government agencies share investigation findings publicly, similar to NTSB incident reports, so the broader community can learn from failures and improve safety practices.
- Build Cross-Sector Collaboration: Bring together government officials, AI developers, industry experts, and independent researchers in formal investigation processes to ensure comprehensive analysis and diverse perspectives on what went wrong.
- Create Feedback Loops: Establish mechanisms to feed lessons learned from investigations back into training, deployment practices, and product design, ensuring that each incident improves future safety outcomes.
- Maintain Independence: Ensure investigation bodies have sufficient autonomy and resources to conduct thorough examinations without undue pressure from the companies or agencies being investigated.
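To make the documentation step concrete, here is a minimal, hypothetical sketch of what a standardized GenAI incident record might look like. The field names, the severity scale, and the `to_public_report` redaction step are illustrative assumptions, not part of the Aspen framework or Utah's program; they simply show how a shared schema could support both internal root-cause analysis and NTSB-style public reporting.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
from enum import Enum
import json


class Severity(Enum):
    """Illustrative severity scale; the framework does not prescribe one."""
    MINOR = "minor"
    MODERATE = "moderate"
    SEVERE = "severe"


@dataclass
class GenAIIncident:
    """Hypothetical standardized record for a single GenAI incident."""
    incident_id: str
    occurred_on: date
    system_name: str              # the AI tool involved
    deployer: str                 # agency or company operating the tool
    developer: str                # vendor that built the tool
    description: str              # what happened, in plain language
    harm_type: str                # e.g. "algorithmic discrimination"
    severity: Severity
    root_causes: list[str] = field(default_factory=list)
    corrective_actions: list[str] = field(default_factory=list)
    internal_notes: str = ""      # withheld from the public report

    def to_public_report(self) -> str:
        """Serialize the findings meant for public release,
        dropping fields a sandbox participant might keep internal."""
        record = asdict(self)
        record.pop("internal_notes")
        record["occurred_on"] = self.occurred_on.isoformat()
        record["severity"] = self.severity.value
        return json.dumps(record, indent=2)


# Example: documenting a hypothetical biased-screening incident
# and publishing the findings.
incident = GenAIIncident(
    incident_id="UT-2025-0001",
    occurred_on=date(2025, 3, 14),
    system_name="ResumeRanker",       # hypothetical tool
    deployer="Example State Agency",
    developer="Example AI Vendor",
    description="Screening model down-ranked applicants from two zip codes.",
    harm_type="algorithmic discrimination",
    severity=Severity.MODERATE,
    root_causes=["training data skewed toward historical hires"],
    corrective_actions=["retrain on balanced data", "add fairness audit"],
)
print(incident.to_public_report())
```

A common schema of this kind is what would let the "feedback loops" above operate across organizations: if every sandbox participant publishes findings in a comparable format, lessons from one deployer's incident can flow directly into another's training and product design.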
The framework builds on Utah's broader push to position itself as a national leader in AI governance. A previous Aspen Policy Academy collaboration outlined evaluation standards focused on transparency, accountability, and public trust, which according to the Office of AI Policy's website are central to the state's AI strategy.
Could This Model Scale Beyond Utah?
The implications extend well beyond Utah. The framework positions incident investigation as the next phase of AI governance, one that could help states move from reactive regulation to continuous learning. This approach could potentially offer a model for federal policymakers seeking more consistent AI oversight across the country.
However, Sipics acknowledged that scaling this approach faces real challenges. "Realistically, I think transparency is probably the best path to scale because best practices like this build in a community," she noted. "When people see you being responsible and sharing what you've learned and continuously improving the safety of your products, that has value, that gets buy-in." This suggests that the framework's success will depend less on top-down mandates and more on demonstrating tangible benefits through voluntary adoption and peer learning.
The broader context for this framework is increasingly urgent. As frontier AI systems become more capable and more widely deployed in government and commercial settings, the potential for harm scales accordingly. The aviation model offers a proven pathway for managing that risk, but only if states and the federal government commit to the transparency, investigation rigor, and continuous improvement that the framework demands. For now, Utah's experiment will serve as a critical test case for whether this approach can work in practice.