The recent standoff between Anthropic and the Pentagon over artificial intelligence (AI) use has exposed a fundamental crisis of trust between tech companies, the federal government, and the American public. When Anthropic refused to agree to Pentagon terms that would allow unrestricted military use of its AI models, the company drew a line on two specific issues: mass surveillance of US citizens and autonomous weapons development. The Pentagon responded by designating Anthropic as a supply chain risk, triggering a broader reckoning about who controls AI and what it should be used for.

Why Do Americans Distrust AI More Than People Worldwide?

The skepticism driving the Anthropic controversy reflects a striking divide between American public opinion and global sentiment. A 2025 poll conducted by Gallup and the Special Competitive Studies Project found that 60 percent of Americans distrust AI somewhat or fully. This stands in sharp contrast to other major economies. According to Stanford's annual AI Index, large majorities in China, Indonesia, and Thailand, ranging from 75 to 80 percent, believe AI-powered products offer more benefits than drawbacks. In the United States, that number drops to just 39 percent.

Several concrete concerns drive this American skepticism. Safety fears dominate public discourse, including worries about AI-driven psychosis and AI-enabled teen suicides. Environmental and resource concerns have also gained traction, with social media campaigns calling for boycotts over the energy and water demands of the data centers powering AI systems. Perhaps most immediately, anxiety over job displacement has intensified following high-profile mass layoffs. When fintech company Block cut 40 percent of its workforce due to AI integration into company workflows, it amplified fears about broader workforce contractions.

How Are States and Communities Pushing Back Against AI Expansion?

Public resistance to AI development has translated into concrete political action at multiple levels. More than 1,500 AI-related bills were introduced in state legislatures in 2026 alone, many focused on protecting consumers and minors from AI-related harms. This legislative activity reflects bipartisan concern: data centers have drawn criticism from left-leaning environmental advocates and deep-red communities alike.

The economic impact of this resistance is substantial. A study found that twenty data center projects were blocked in the second quarter of 2025 due to local opposition, representing $98 billion in stalled investment. This opposition has shifted even politicians who previously championed data center development. At least six Democratic governors used their state of the state addresses to announce plans to roll back incentives or impose new regulations on data centers. Democratic lawmakers in New York and Maine, as well as Republican lawmakers in Oklahoma, are calling for temporary bans on new data center construction.

Ways to Understand the Government's Competing Priorities on AI

- National Security Focus: The Trump administration has made AI a national priority, positioning the United States as competing directly with China for AI dominance. The administration released seven executive orders related to AI in 2025, signaling intent to "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security."
- Development Acceleration: The administration has expanded AI education opportunities, worked to harness AI for scientific research, and accelerated permitting for data center construction. It has also attempted to prevent states from passing laws regulating AI, though these efforts have faced significant pushback.
- Public Reassurance Measures: Recognizing public resistance, the administration introduced a ratepayer protection pledge calling on technology companies to cover the cost of the increased energy production needed to support data center buildout, preventing those costs from being passed on to local communities. Seven of the largest AI companies have signed the pledge.

The tension between the administration's maximalist AI position and public opinion became starkly apparent during the Anthropic controversy. The administration's stance that contracts with AI companies should give the government flexibility to employ AI for "all lawful uses" runs directly counter to US public opinion: 80 percent of US adults believe the government should maintain rules for AI safety and data security, even if doing so slows development.

What Happened When OpenAI Signed a Pentagon Deal?

Public reaction to OpenAI's February 27 announcement that it had signed a Pentagon deal, which OpenAI claimed contained the same provisions Anthropic had fought for, revealed the depth of public distrust. Uninstalls of the ChatGPT app jumped 295 percent overnight, and a #QuitGPT campaign gained momentum on social media. Some OpenAI employees publicly criticized their company's stance, and the company's hardware lead resigned in protest.

Anthropic's legal response to the Pentagon's supply chain risk designation attracted support from an unusually broad coalition. The case drew amicus briefs from tech sector workers, Catholic theologians and ethicists, and the American Civil Liberties Union. A brief signed by almost forty employees of Google and OpenAI, including Google's chief scientist, affirmed shared concerns about the risks underlying Anthropic's contractual red lines. Their brief noted the dangers to US democracy posed by AI-enabled surveillance and warned that today's AI systems are too immature to be relied on in lethal autonomous weapons.

The Anthropic standoff has crystallized a larger governance challenge: how to balance rapid AI development with legitimate public concerns about surveillance, autonomy, and democratic integrity. As more than 1,500 state-level AI bills circulate through legislatures and public trust remains fragile, the outcome of this dispute will likely shape how governments and companies negotiate AI governance for years to come.