How AI Weapons Are Quietly Rewriting the Rules of War Before Diplomats Even Notice
Military powers are already deploying AI-enabled weapon systems and integrating them into everyday military operations, quietly reshaping international norms around the acceptable use of force before diplomats have finished debating whether to regulate them. Rather than emerging from treaties and formal agreements, new understandings of what counts as normal in warfare are being established by the actual practices of nations such as the United States, China, Russia, and Japan, according to research tracking how autonomous weapons systems are designed, tested, and deployed.
Why Do Military Practices Matter More Than Treaties?
International relations scholars have long assumed that new norms emerge from formal negotiations in public forums like UN debates, where states present positions and negotiate binding agreements. But this assumption misses a critical reality: powerful norms also develop from the bottom up, through the everyday practices that militaries actually engage in. Historical examples illustrate this pattern. Landmines, for instance, were used in warfare for decades before they became the subject of international conventions. During that quiet period of experimentation and operational routine, shared understandings of what is normal and acceptable began to take shape, often implicitly and without written documentation.
The same dynamic is now unfolding with AI-enabled weapon systems. Researchers studying this phenomenon explain that norms emerging from practice can be powerful precisely because they are often unquestioned. As one researcher noted, "Slavery, for example, was once a norm in the sense that it expressed an idea of appropriate behaviour and was normalised, even though we now consider it as utterly inappropriate". This historical perspective underscores a troubling reality: the norms being established today through military use of AI may shape international expectations about acceptable warfare for decades to come, regardless of what diplomats eventually decide.
What Is Actually Happening in War Rooms Right Now?
Rather than treating autonomous weapons as a distant, futuristic threat, researchers are following AI-enabled systems into today's military operations. They examine how targeting software, autonomous drones, and other AI systems are designed, tested, and integrated into everyday military routines. By tracing these practices across China, Japan, Russia, and the United States, researchers have documented that patterns of use and experimentation are already rewriting international norms on the use of force, even as diplomats move slowly.
The urgency of this research has intensified dramatically. When this research project began in 2020, autonomous weapons still carried a science-fiction quality. But in recent years, the use of various AI systems in warfare has accelerated significantly. The invasion of Ukraine and more recent conflicts involving Iran have brought widespread public attention to AI's role in military decision-making and weapon systems. This shift from theoretical concern to practical reality has made understanding how these systems are actually being deployed far more critical.
How Are Researchers Uncovering Hidden Military Practices?
Much of the relevant military practice remains hidden from public view, protected by secrecy and security classifications. Researchers studying this landscape employ multiple methods to piece together the picture. They attend UN meetings as observers and engage in informal discussions with diplomats and experts on the sidelines. Over time, this approach has made them trusted interlocutors, opening doors to expert meetings held under the Chatham House rule, where officials, international organizations, and industry representatives speak more freely.
These insights are complemented by meticulous open-source research. Researchers read manufacturers' press releases, technical brochures, and interviews with company representatives to understand how AI-enabled systems work. They cross-check military systems against similar civilian applications, where technical documentation is often more accessible because the underlying technologies are largely identical. While direct observation of military exercises, especially in Russia and China, is not possible, researchers have been surprised by how much can be learned by triangulating informal conversations with publicly available documents. Major system failures often prompt greater openness, with more people speaking up and more reports appearing on how the systems operated.
Steps to Understanding the Human Control Problem in AI Weapons
- Recognize the "rubber-stamping" problem: Early debates on autonomous weapons centered on the need for direct human supervisory control over use-of-force decisions. But many existing systems, such as air defense systems, already integrate automated technologies in which a human supervisor receives an output from the system and must approve or reject it. This raises a critical question: is the human truly exercising control, or merely rubber-stamping the system's output?
- Understand the time pressure constraint: In some systems, humans have as little as ten seconds to make a decision about whether to attack a target. This timeframe is hardly sufficient for independent verification that the target should actually be attacked, meaning the human supervisor may lack the necessary situation awareness in the moment.
- Address design-stage decisions: Many states have become invested in defining criteria for human control and ensuring it throughout the entire life cycle of AI-based systems. However, choices that determine this are often made at the design stage, including how humans can understand the basis for the AI's output and how they can question that output. These questions must be addressed during design and testing phases, because once a system is in use, there are limits to what can be changed.
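The time-pressure constraint in the steps above can be made concrete with a minimal sketch. Everything here is an illustrative assumption, not a description of any real system: the function name, the 10-second window, and the 45-second verification time are hypothetical figures chosen to mirror the article's point that a decision window shorter than the time needed for independent verification reduces the human role to a rubber stamp.

```python
# Hypothetical sketch of the "rubber-stamping" dynamic described above.
# All names and numbers are illustrative assumptions, not from any real system.

def review_outcome(decision_window_s: float, verification_time_s: float) -> str:
    """Classify a human supervisor's role for one system recommendation.

    If the window the system allows for a decision is shorter than the time
    a human needs to independently verify the target, the 'approval' is
    effectively a rubber stamp rather than meaningful control.
    """
    if decision_window_s >= verification_time_s:
        return "meaningful control"
    return "rubber stamp"

# The article mentions windows as short as ten seconds; assume (hypothetically)
# that independent verification of a target takes about 45 seconds.
print(review_outcome(decision_window_s=10.0, verification_time_s=45.0))
print(review_outcome(decision_window_s=120.0, verification_time_s=45.0))
```

The point of the sketch is that the outcome is fixed at design time: once the decision window is engineered into the system, no amount of supervisor diligence in the moment can restore meaningful control, which is why the list above stresses design-stage choices.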
The erosion of human agency in military decision-making represents what researchers call a "governance gap" around autonomous weapons.
"By letting AI do much of the analysis, the human becomes a passive supervisor rather than an active controller. Adding AI technology increases complexity in ways that can exceed human cognitive capacities. I find this concerning: we comfort ourselves with the idea that there is still a human 'in the loop', but that human may not have the necessary situation awareness in the moment," explained Isabelle Bode, researcher leading the AUTONORMS project.
How Are Major Powers Positioning Themselves Differently?
The United States has shifted its official position significantly. It moved from strong skepticism toward regulation, arguing that existing international humanitarian law is sufficient, to greater openness under the Biden administration toward soft-law approaches. The US put forward "responsible AI" principles intended to guide military use of AI. More recently, however, the country has turned back toward deregulation, with increasing pressure on companies that try to draw red lines on the use of their technologies in warfare.
China has maintained a more ambivalent position. It initially backed calls to negotiate a new treaty banning some AI weapon systems, joining Global South countries advocating for stricter international law. Over time, however, it became clear that China's proposed prohibitions were narrowly defined and excluded much of what is already happening in practice. As dynamics with the United States have evolved, China has also sought to preserve greater room for maneuver in developing these systems.
Looking beyond official positions and focusing on actual practices reveals further nuances. Domestically, China has moved strongly toward regulating civilian AI applications to limit risks associated with technologies such as generative AI. Researchers are particularly interested in whether this regulatory approach spills over into the military domain. If systems are developed to comply with domestic regulations in the civilian sphere and then applied to military uses, they have already been subject to some regulatory constraints. This cross-domain connection could reshape how military AI systems are developed, even if formal international treaties remain elusive.
The fundamental challenge is that states do not have single, unified positions on AI weapons. Many forces pull them in different directions, and those tensions are reflected in their actual practices. Understanding these differences, rather than treating all major powers as having monolithic stances, is essential for anticipating how military AI will actually develop and what informal norms will solidify before formal governance catches up.