Why Governments Should Treat AI Like Any Other Tool, Not a Threat
Governments shouldn't regulate AI as a revolutionary threat requiring special oversight; instead, they should treat it as a powerful tool for delivering services more efficiently while managing concrete risks like data breaches, biased algorithms, and misinformation. This perspective, advanced by Harvard Kennedy School policy experts, challenges the prevailing approach to AI governance and offers a practical framework for how public institutions should think about artificial intelligence in 2026 and beyond.
What Are the Real Risks Governments Face When Using AI?
When governments deploy AI systems, they inherit a specific set of vulnerabilities that differ from private sector concerns. Mark Fagan, a lecturer in public policy at Harvard Kennedy School, outlined the concrete challenges that public institutions must address when integrating AI into constituent services. These risks aren't theoretical; they directly affect citizens' access to information and fair treatment.
- Data Privacy Breaches: Governments hold sensitive personal information about citizens, and when that data enters AI systems, there's a risk of individual identification despite anonymization efforts. Careful data governance and limiting what information gets fed into algorithms are essential safeguards.
- Hallucinations and False Information: Large language models predict the next word based on patterns in training data, which can produce plausible-sounding but entirely false information. Verification at multiple stages is the only reliable solution (a minimal sketch of this check and the data safeguards above follows this list).
- Algorithmic Bias: When training data doesn't represent the full population, AI systems can perpetuate discrimination, particularly in hiring and benefit allocation decisions.
- Disinformation at Scale: AI makes it easier to create convincing fake images, videos, and documents, requiring governments to actively monitor and build community trust.
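To make the first two safeguards concrete, the sketch below pairs a data-minimization step with a crude output check. It is a minimal sketch under stated assumptions: the helper names (redact_pii, verify_answer), the regex patterns, and the approved-source test are invented for illustration, and a production pipeline would rely on a vetted PII library such as Microsoft Presidio plus human review for high-stakes answers.

```python
import re

# Illustrative patterns only; real systems should use a vetted PII
# library rather than hand-rolled regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Data minimization: strip recognizable PII before the text is
    ever sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def verify_answer(answer: str, approved_sources: list[str]) -> bool:
    """Second-stage check: release an answer only if it references at
    least one document from an approved source list."""
    return any(src in answer for src in approved_sources)

query = "My SSN is 123-45-6789. When does school enrollment open?"
print(redact_pii(query))
# -> My SSN is [SSN REDACTED]. When does school enrollment open?
```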
"People should be thinking about AI as another tool in the toolkit for delivering constituent services with quality, efficiency, and fairness. AI is a very powerful tool. It's an evolving tool. But it is only a tool," said Mark Fagan, lecturer in public policy at Harvard Kennedy School.
Fagan illustrated how AI could improve government services with a practical example: a school enrollment chatbot available 24/7 in multiple languages would serve families far better than requiring phone calls during business hours. But that same system could exclude people without technology access or digital literacy, raising fairness concerns that go beyond the technology itself.
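As a toy illustration of that idea (not Fagan's design), the snippet below hard-codes a bilingual FAQ; the topics, translations, and fallback behavior are invented for this sketch, and a real deployment would put an LLM behind the safeguards shown earlier rather than a lookup table.

```python
# A toy stand-in for an always-on, multilingual enrollment bot.
ENROLLMENT_FAQ = {
    "en": {"deadline": "Enrollment closes March 15.",
           "documents": "Bring proof of address and a birth certificate."},
    "es": {"deadline": "La inscripción cierra el 15 de marzo.",
           "documents": "Traiga comprobante de domicilio y acta de nacimiento."},
}

def answer(topic: str, lang: str = "en") -> str:
    """Answer in the requested language, falling back to English, and
    refer unknown topics to a human: the bot augments staff rather
    than replacing them."""
    faq = ENROLLMENT_FAQ.get(lang, ENROLLMENT_FAQ["en"])
    return faq.get(topic, "Please call the enrollment office for help.")

print(answer("deadline", "es"))  # -> La inscripción cierra el 15 de marzo.
```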
How Should Policymakers Build AI Regulation That Actually Works?
Rather than creating a single "super regulator" for all AI applications, Jason Furman, a Harvard economist and policy expert, proposed five principles that should guide how governments think about AI regulation. These principles reject both the precautionary approach (requiring AI to prove it's perfectly safe before deployment) and the hands-off approach (allowing AI to develop without oversight). Instead, they advocate for balanced, domain-specific governance.
- Balance Benefits Against Risks: The precautionary principle would delay AI deployment until all risks are eliminated, but this approach would forfeit enormous benefits in scientific research, education, and labor productivity. The right approach weighs the costs of deploying AI against the costs of delaying its benefits.
- Compare AI to Humans, Not Perfection: AI systems are biased, make mistakes, and can be overconfident. But humans do all of these things too, often worse. The question isn't whether AI is perfect, but whether it performs better than human alternatives in specific applications.
- Strengthen Existing Regulators, Don't Create New Ones: Cars, medical devices, and financial trading already have regulatory frameworks. Rather than establishing a separate AI regulator, governments should equip existing regulators with AI expertise so they can evaluate whether outputs are safe and effective.
- Prevent Regulation From Protecting Incumbents: Large AI companies sometimes support regulation that they can afford to comply with but smaller competitors cannot. Overly burdensome rules risk entrenching monopoly power and stifling the competition that has driven AI innovation.
- Recognize What Regulation Cannot Solve: Some problems, like preventing AI-generated abuse material or bioweapon development, urgently need solutions. But regulation alone cannot ensure AI reduces inequality or makes work more meaningful; those problems require broader policy solutions like progressive taxation and education reform.
"AI is biased. AI gets into car accidents. AI can be overconfident and make stuff up. But guess what? Humans do all that too, and in many cases they do all of that even more and worse," explained Jason Furman, a policy expert at Harvard Kennedy School.
What Expertise Do Government Organizations Actually Need?
Building effective AI governance isn't just about writing rules; it requires developing internal capacity across multiple domains. Fagan emphasized that government agencies need three distinct types of expertise to use AI responsibly. Technical expertise includes machine learning specialists, data scientists, and cybersecurity professionals who understand how these systems work. Regulatory and ethics expertise covers knowledge of applicable laws and frameworks for ethical decision-making. Perhaps most importantly, organizations need people who can operate under uncertainty, since AI development is happening in real time and requires constant learning and adaptation.
The European Union's AI Act provides a concrete model for this risk-based approach. Rather than banning all AI or allowing it everywhere, the EU created a pyramid structure: at the top are prohibited uses, such as social scoring systems; in the middle are applications requiring transparency and oversight; and at the bottom are low-risk uses with minimal restrictions. This framework acknowledges that not all AI applications pose equal risks and that regulation should match the threat level.
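As a rough sketch of how an agency might mirror that pyramid in an internal triage tool, consider the snippet below; the tier names and the example mapping are illustrative assumptions, not the Act's legal text.

```python
from enum import Enum

# Illustrative tiers loosely modeled on the EU AI Act's pyramid.
class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH_RISK = "allowed with transparency and oversight obligations"
    MINIMAL_RISK = "allowed with few restrictions"

# Hypothetical classification of a few proposed uses.
APPLICATIONS = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "resume screening for public-sector hiring": RiskTier.HIGH_RISK,
    "spam filtering for an agency inbox": RiskTier.MINIMAL_RISK,
}

def triage(application: str) -> str:
    """Look up a proposed use; unknown uses default to the most
    cautious reviewable tier pending a proper assessment."""
    tier = APPLICATIONS.get(application, RiskTier.HIGH_RISK)
    return f"{application}: {tier.name} ({tier.value})"

print(triage("resume screening for public-sector hiring"))
```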
Government agencies can also tap external expertise from the broader AI community, including nonprofits, academic institutions, and other public sector organizations working on these challenges. This collaborative approach helps agencies avoid reinventing solutions and benefit from shared learning across sectors.
Why Does the Comparison Between AI and Human Performance Matter?
The most significant shift in thinking about AI regulation involves changing the baseline for comparison. Rather than asking whether AI is perfect or risk-free, the relevant question is whether AI performs better than the human alternative in a specific context. This reframing has profound implications for how governments should evaluate AI deployment.
When a government considers using AI to screen job applications, the question isn't whether the algorithm will ever make a biased decision. The question is whether it makes fewer biased decisions than human hiring managers reviewing the same applications. When evaluating an AI system for medical diagnosis, the comparison isn't to a perfect diagnostic tool that doesn't exist; it's to the accuracy and consistency of human doctors. This approach acknowledges that all decision-making systems, human or algorithmic, have flaws. The goal is improvement, not perfection.
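To make that baseline concrete, here is a minimal sketch of the comparison an agency audit might run. The error counts are invented, and a real audit would need matched samples and a careful definition of a "biased decision", but the structure of the question (AI versus the human status quo, not AI versus zero) stays the same.

```python
from math import sqrt, erf

# Invented audit numbers for illustration: biased decisions flagged in
# 1,000 AI-screened and 1,000 human-screened applications.
ai_errors, ai_n = 42, 1000
human_errors, human_n = 78, 1000

def two_proportion_z(e1: int, n1: int, e2: int, n2: int) -> float:
    """Standard two-proportion z-test: is the difference in error
    rates larger than sampling noise would explain?"""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(ai_errors, ai_n, human_errors, human_n)
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
print(f"AI error rate:    {ai_errors / ai_n:.1%}")
print(f"Human error rate: {human_errors / human_n:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
# A small p-value says the AI's lower error rate is unlikely to be
# chance; the relevant benchmark is the human rate, not 0%.
```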
This principle also applies to the speed and scale at which AI can operate. A government agency facing budget constraints and increasing constituent demands can use AI to provide services more efficiently than hiring additional staff. The relevant comparison is between AI-assisted service delivery and the status quo of longer wait times or reduced service quality, not between AI and some hypothetical perfect system.
As governments continue integrating AI into their operations, the framework emerging from Harvard experts suggests that success depends less on creating comprehensive new regulations and more on applying existing regulatory expertise to AI-specific questions, building internal capacity for responsible deployment, and maintaining realistic expectations about what technology can and cannot accomplish.