Why Intelligence Agencies Are Rethinking How They Use AI, and What It Means for National Security
Intelligence agencies are caught between AI's promise to accelerate decision-making and the risk that machine learning models might mask critical gaps in their understanding. At the University of Virginia's National Security Data and Policy Institute (NSDPI) inaugural conference in March, military leaders, academics, and industry experts gathered to confront a fundamental question: how can the intelligence community safely harness AI without repeating past analytical failures?
The NSDPI, established in 2024, works across government, academia, and industry to tackle urgent national security challenges using cutting-edge technology. Its researchers develop AI algorithms and other advanced tools to help analysts and decision makers make sense of massive amounts of data, connecting disparate information to identify possible threats that traditional analysis might miss.
What Are the Real Risks of Deploying AI in Intelligence Work?
The conference's AI and intelligence panel surfaced a concern that cuts to the heart of modern tradecraft. Amy McAuliffe, Distinguished Visiting Professor of the Practice at the University of Notre Dame, drew a direct line between past intelligence failures and current AI risks. She noted that the analytic lessons learned from Iraq remain highly relevant today, particularly around factoring in confidence levels, acknowledging gaps in intelligence, and incorporating individual analysis into broader predictions.
"We do not want rapidity and recency bias, which AI models are dominated by," McAuliffe cautioned.
The concern reflects a deeper problem: AI systems, particularly large language models (LLMs), which are machine learning systems trained on vast amounts of text data, tend to generate plausible-sounding answers even when they lack reliable information. Rebecca Hersman, a senior research scholar at the Centre for the Governance of AI, highlighted a specific danger. "AI's reluctance to reveal that it doesn't know an answer, in the interest of being helpful, means we risk losing touch with our unknown unknowns as we assess our confidence in our conclusions," she explained.
This dynamic becomes especially problematic when policymakers lose faith in traditional intelligence tradecraft. Hersman noted that many leaders remain skeptical following intelligence failures, and when decision makers grow frustrated by the opacity of sources and methods behind intelligence conclusions, they may turn to their own LLMs as trusted advisors, bypassing the intelligence community entirely.
How Should Intelligence Agencies Integrate AI Into Their Workflows?
Despite these risks, panelists agreed that AI is already embedded in intelligence work and offers genuine benefits when deployed thoughtfully. Wayne McCool, a former senior intelligence executive, acknowledged the concerns but offered a note of optimism, noting that much of the intelligence community has used AI for years in automated tasks such as map-making and in helping analysts digest overwhelming amounts of data.
The key difference lies in how agencies approach AI integration. Bruce Frost, vice president of intelligence at Rhombus Power and a veteran of over thirty years of federal service, emphasized a critical distinction: "Are you trying to convert a model made for a commercial purpose to serve an intelligence purpose? Or are you building a model from scratch? It's usually better to build from scratch," he explained.
This distinction matters because commercial AI models, trained on internet data and optimized for general-purpose tasks, can carry biases that make them unsuitable for intelligence analysis. Just as human analysts must interrogate their own biases, machine learning models require careful scrutiny. Frost stressed that companies providing intelligence services have major responsibilities as AI continues to shrink data processing timelines and the intelligence community moves toward public-private partnerships to gain productivity benefits.
Steps for Building Trustworthy AI in Intelligence Operations
- Build Models From Scratch: Rather than adapting commercial AI systems, intelligence agencies should develop models specifically designed for classified analysis, reducing the risk of embedded commercial biases affecting national security decisions.
- Prioritize Explainability: McAuliffe emphasized that AI's ability to explain its decisions and reasoning is crucial to building confidence in conclusions, though she noted this capability does not yet exist reliably in the public or private sector.
- Maintain Confidence Levels: Agencies should require AI systems to clearly indicate uncertainty and gaps in knowledge rather than generating plausible-sounding answers, preserving analysts' awareness of what remains unknown.
- Integrate Human Expertise: McCool noted that AI works best when experts know how to properly integrate it into analysis, using it to enhance rather than replace human judgment in critical decisions.
The broader context for this debate extends beyond tradecraft. The conference also addressed how AI's rapid advancement is creating new dependencies in global supply chains. Critical minerals and other inputs required to manufacture AI chips are increasingly concentrated in specific countries, creating vulnerabilities in the defense industrial base.
General Robert Neller, the 37th Commandant of the Marine Corps, opened the conference by asking a deceptively simple question: "What is the problem we're trying to solve?" His remarks underscored how the character of warfare itself is changing. "It used to be just air, land, and sea," he noted. "Now we have six or seven domains to worry about, and the most important to me are space and cyber. The more I read about Claude, Gemini, ChatGPT, it's terrifying and exciting at the same time."
The intelligence community's challenge is to harness that excitement while managing the terror. As AI systems become faster and more capable, the stakes of getting integration right have never been higher. The conference made clear that the answer lies not in rejecting AI, but in building it thoughtfully, with human expertise and institutional safeguards at the center of the process.