Americans are deeply skeptical that their government can regulate artificial intelligence responsibly, with only 44% expressing trust, compared to 89% in India and 72% in Israel. This trust gap isn't just a perception problem. It's a governance crisis that could undermine some of the most consequential AI applications in healthcare, benefits access, and national security.

A Pew Research Center study from October 2025 reveals a stark divide in global confidence. While large majorities in countries like Indonesia (74%) and Israel (72%) trust their governments to regulate AI effectively, Americans lag significantly behind. Even more troubling, 47% of Americans actively distrust government AI regulation, and only 37% trust the European Union's approach, an even lower share than trusts the U.S. government itself.

## Why Government AI Governance Matters More Than You Think

The federal government isn't just another AI user; it's a proving ground for the entire nation. Agencies deploy some of the highest-stakes AI systems imaginable: algorithms that determine access to veterans' benefits, guide law enforcement priorities, manage immigration processes, and inform national security decisions. When these systems fail or lack transparency, public confidence doesn't just erode in that single application. It spreads like a crack through the entire foundation of trust in government and in AI itself.

The stakes become crystal clear when you examine real-world examples. The Department of Veterans Affairs' REACH VET program uses predictive models to identify veterans at elevated suicide risk so clinicians can proactively reach out. The system draws on health records and includes explicit race coding, exactly the kind of sensitive data that demands transparency and accountability. If veterans feel an algorithm is driving mental health interventions without clear explanations or guardrails, trust erodes not only in REACH VET but in the VA's entire mental health infrastructure.

Similarly, the Centers for Medicare & Medicaid Services (CMS) is testing the Medicare WISeR Model, which would use AI to expedite prior authorization decisions for items and services flagged as vulnerable to fraud or inappropriate use. In practice, this means automated systems could delay or deny coverage for medically necessary items and services if an algorithm incorrectly flags them as suspicious. For older adults and medically complex patients already frustrated by prior authorization barriers, adding opaque AI without clear recourse mechanisms could make the system feel less like a safeguard and more like an unaccountable gatekeeper.

## How Government Guidance on AI Has Shifted, and What It Means

The Biden administration's Office of Management and Budget released OMB Memorandum M-24-10 in 2024, establishing a government-wide framework for responsible AI use. The memo introduced formal designations for high-risk systems: "rights-impacting AI" (systems whose outputs affect civil rights, privacy, or equitable access to services like housing, education, or credit) and "safety-impacting AI" (systems whose decisions could significantly affect human life, the environment, or critical infrastructure). The framework required federal agencies to conduct risk assessments, maintain transparency, implement safeguards for high-impact systems, and establish clear waiver processes.
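To make those designations concrete, here is a minimal sketch of how an agency's AI inventory might record them. The two category names come from M-24-10; everything else here (the `AISystemRecord` structure, its fields, and the `deployment_blockers` check) is an illustrative assumption, not an official schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class ImpactDesignation(Enum):
    """High-risk categories defined in OMB M-24-10."""
    RIGHTS_IMPACTING = "rights-impacting"   # civil rights, privacy, equitable access
    SAFETY_IMPACTING = "safety-impacting"   # human life, environment, infrastructure
    NEITHER = "neither"

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry. M-24-10 requires agencies to keep
    AI inventories; this exact structure is an illustrative assumption."""
    name: str
    owner_agency: str
    designation: ImpactDesignation
    data_sources: list[str] = field(default_factory=list)
    risk_assessment_completed: bool = False
    human_review_available: bool = False
    waiver_granted: bool = False

    def deployment_blockers(self) -> list[str]:
        """Safeguards that must exist before a high-impact system
        goes live, absent an approved waiver."""
        if self.designation is ImpactDesignation.NEITHER or self.waiver_granted:
            return []
        blockers = []
        if not self.risk_assessment_completed:
            blockers.append("complete and document a risk assessment")
        if not self.human_review_available:
            blockers.append("provide a human review and appeal channel")
        return blockers

# Example: a benefits-eligibility model would be rights-impacting.
record = AISystemRecord(
    name="benefits-eligibility-model",
    owner_agency="Example Agency",
    designation=ImpactDesignation.RIGHTS_IMPACTING,
    data_sources=["application forms", "income records"],
)
print(record.deployment_blockers())
```

The point of the sketch is the gate: under M-24-10's approach, safeguards are preconditions to deployment rather than after-the-fact paperwork.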
However, the Trump administration's OMB superseded this guidance with M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust. While the new memo includes similar elements, it shifts toward greater agency discretion and flexibility, prioritizing faster AI adoption over centralized compliance requirements. The problem is real: well-funded federal agencies may have the resources to build robust governance structures, but smaller or resource-constrained agencies, including those whose tools have the greatest impact on low-income and marginalized communities, may struggle to develop equivalent safeguards. This creates a fragmented landscape where protection depends on an agency's budget, not the actual risk level of the AI system.

## Steps to Building Trust in Government AI Systems

- Transparency Requirements: Federal agencies must clearly document how AI systems work, what data they use, and how decisions are made. Veterans and Medicare beneficiaries deserve to understand why an algorithm flagged them for intervention or denied coverage.
- Risk Assessments and Inventory Documentation: Agencies should maintain detailed inventories of all AI systems in use, assess their potential impact on rights and safety, and update these assessments regularly as systems evolve or encounter new populations.
- Meaningful Recourse Processes: When an AI system makes a decision that affects someone's healthcare, benefits, or legal status, there must be a clear, understandable way to challenge that decision and have it reviewed by a human with authority to overturn it.
- Diverse Validation Testing: Before deploying AI in high-impact domains, agencies must test systems across racially, socioeconomically, and geographically diverse populations to catch bias and ensure equitable outcomes (see the sketch at the end of this piece).
- Adequate Resourcing for All Agencies: Federal guidance should include funding mechanisms or technical support to ensure that smaller agencies can meet governance standards without compromising the safety of the communities they serve.

## What Experts Say About the Trust Crisis

The Federation of American Scientists emphasizes that trust is not a soft concern; it's the foundation for adoption, legitimacy, and long-term success of any technology. "When people doubt that AI systems are governed responsibly, they are less likely to accept their use in sensitive domains like healthcare, education, public benefits, or national security," the organization notes. Public skepticism can slow innovation, undermine compliance, and deepen polarization around emerging technologies.

Importantly, this isn't a partisan issue. Republicans and Democrats alike have emphasized that trustworthy AI use is a prerequisite for public adoption and lasting legitimacy. If the U.S. is going all-in on AI, and it is, then building and maintaining public trust isn't simply a communications challenge. It's a governance imperative.

The path forward requires federal agencies to demonstrate that high-risk AI systems can be governed effectively through transparency, oversight, accountability, and meaningful safeguards. Failure to do so would not only diminish confidence in AI as an economic and societal asset but also weaken the already tenuous trust the public has in government as a manager of risk and opportunity. For Americans facing healthcare decisions, benefit applications, or interactions with law enforcement, the stakes couldn't be higher.
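To ground the diverse-validation-testing recommendation above, here is a minimal sketch of one common check: comparing a model's false positive rate across subgroups on held-out data. The record format, the `disparity_check` helper, and the tolerance ratio are illustrative assumptions; a real agency audit would involve far more than a single metric.

```python
from collections import defaultdict

def false_positive_rates(records, group_key="group"):
    """Compute per-subgroup false positive rates.

    Each record is a dict with:
      group_key  - subgroup label (e.g., demographic or geographic)
      "label"    - true outcome (1 = genuinely warranted a flag)
      "flagged"  - model decision (1 = flagged by the system)
    """
    fp = defaultdict(int)   # wrongly flagged (label == 0 but flagged)
    neg = defaultdict(int)  # all true negatives in the subgroup
    for r in records:
        if r["label"] == 0:
            neg[r[group_key]] += 1
            if r["flagged"] == 1:
                fp[r[group_key]] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

def disparity_check(rates, max_ratio=1.25):
    """Flag subgroups whose false positive rate exceeds the best-treated
    group's by more than max_ratio (an illustrative tolerance, not a
    legal or regulatory standard)."""
    baseline = min(rates.values())
    return [g for g, r in rates.items() if baseline > 0 and r / baseline > max_ratio]

# Toy held-out data: the model over-flags group "B".
data = (
    [{"group": "A", "label": 0, "flagged": 0}] * 90
    + [{"group": "A", "label": 0, "flagged": 1}] * 10
    + [{"group": "B", "label": 0, "flagged": 0}] * 75
    + [{"group": "B", "label": 0, "flagged": 1}] * 25
)
rates = false_positive_rates(data)
print(rates)                   # {'A': 0.1, 'B': 0.25}
print(disparity_check(rates))  # ['B']
```

In practice an agency would examine more than one error metric (false negatives matter just as much when the flag gates access to care or benefits) and would publish the results as part of the system's inventory documentation.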