AI governance rules are being written in Pentagon procurement contracts, State Department cables, and vendor agreements, not in legislatures or regulatory agencies. As a result, the actual operating rules for how artificial intelligence systems function in critical contexts like military operations, intelligence work, and infrastructure are being decided through channels where affected governments and populations have no voice. For privacy and AI governance professionals, especially those working outside the United States, this creates a governance gap that traditional regulation cannot fill.

How Are Military Contracts Reshaping AI Governance?

Three recent events reveal how procurement has become the real regulatory layer for AI systems.

First, the Pentagon reportedly gave Anthropic, the company behind Claude, a popular AI assistant, a "supply chain risk" designation typically reserved for foreign adversaries threatening military infrastructure. The trigger was not a security breach but a contract dispute: Anthropic had sought to restrict how Claude could be used in military operations, and the Department of Defense responded by treating that disagreement as a national security problem.

Second, Claude was reportedly used in U.S. military operations against Iran and was involved in operations in Venezuela, apparently beyond the usage restrictions Anthropic had attempted to set.

Third, on February 25, the Department of State instructed U.S. diplomats worldwide to actively oppose foreign "data sovereignty" and data-localization initiatives, framing cross-border data governance not as a legitimate regulatory choice but as an obstacle to American interests.

None of these events involved legislation. None went through a transparent rulemaking process. And none included any mechanism for input from the governments or populations most affected by their consequences.
Yet together, they are shaping the operative rules of AI governance more decisively than any regulation currently on the books.

What Does This Mean for Privacy Professionals in the Global South?

For practitioners working in AI governance and data protection, especially in Latin America and the Global South, these developments are not distant geopolitical headlines. They have direct, practical implications that affect how organizations can govern AI systems within their jurisdictions. The challenge operates on three levels:

- Procurement as Hidden Governance: When the terms of AI deployment in the most consequential contexts, such as military operations, intelligence work, and critical infrastructure, are set through vendor contracts and security designations rather than public regulation, the governance framework that matters is the one embedded in the contract, not the one in the statute book. For privacy professionals advising organizations that procure U.S.-built AI systems, the compliance baseline they work with may be shaped by terms they have never seen and had no role in negotiating.

- Security Exceptions Overriding Safeguards: The reported use of Claude in military operations apparently beyond developer-imposed restrictions signals a pattern in which national security framing overrides both corporate governance commitments and the technical safeguards AI companies present as evidence of responsible deployment. When security exceptions become the norm rather than the exception, the entire framework of "responsible AI" that governance professionals rely on loses predictive value.

- Diplomatic Pressure Narrowing Policy Space: The State Department cable did not just promote open data transfer; it told diplomats to actively push back against data sovereignty initiatives. These are the same frameworks that countries across Latin America and the Global South are building right now to give their citizens a say in how AI systems handle their data.
If you are working on data protection policy in the region, the message is clear: your regulatory options are being narrowed, not by law, but by diplomatic pressure and what amounts to digital protectionism in reverse.

What Is "Algorithmic Governance Dependence" and Why Should You Care?

The deeper problem goes beyond any individual policy dispute. Countries in the Global South that adopt U.S.-built AI systems are not simply acquiring tools. They are inheriting a governance logic (which vendors are "trusted," what uses are permitted, and what data flows are required) that was defined upstream through contracts, security designations, and diplomatic negotiations in which they had no voice.

This creates what experts call "algorithmic governance dependence": a dynamic in which AI adoption generates productivity gains and modernization, but without technological sovereignty or meaningful participation in the governance decisions that shape how these systems operate.

The stakes are enormous. A recent World Economic Forum and McKinsey report estimates AI could raise Latin America's productivity by 1.9% to 2.3% annually and generate USD 1.1 trillion to USD 1.7 trillion in additional economic value. This is a transformative opportunity for a region where productivity growth has averaged just 0.4% per year over the past quarter century. But the critical question is: on whose terms will that AI adoption happen?

Algorithmic governance dependence is a new form of structural dependence, one that operates through code, contracts, the imposition of ethical and technical standards, and cloud infrastructure, rather than through traditional economic mechanisms. But it echoes dependency patterns the region knows well.
And it is more fragile than it appears: as South Korea's recent warning that the Iran conflict could disrupt semiconductor manufacturing materials illustrates, the physical supply chains underlying AI infrastructure are themselves vulnerable to the very geopolitical instability that procurement-driven governance accelerates.

Are Regional AI Initiatives Like Latam-GPT Offering a Way Out?

Initiatives like Latam-GPT, the open-source language model developed by Chile's National Center for Artificial Intelligence with contributions from 15 Latin American countries, show the region is not passively accepting this dynamic. Built with regionally sourced data in Spanish, Portuguese, and eventually Indigenous languages, Latam-GPT represents a real effort to build AI infrastructure that reflects local contexts rather than importing Silicon Valley's assumptions.

But the initiative also illustrates the depth of the challenge: until its planned regional supercomputer becomes operational, the model runs on Amazon Web Services. In other words, sovereignty in ambition, dependency in infrastructure. The gap between this regional model and the ecosystems of U.S. AI companies is not just technical. It is structural, and it is precisely the kind of asymmetry that procurement-driven governance deepens.

The practical consequence for privacy and AI governance professionals in the region is stark. You can build the most sophisticated data protection framework in the world, but if the foundational terms of the AI systems deployed in your jurisdiction were set by a Pentagon procurement contract or a State Department cable, your framework is governing the surface while the operating logic runs underneath.

How Can Privacy Professionals Respond to Procurement-Driven Governance?

This is not a call for despair; it is a call for strategic clarity.
Privacy and AI governance professionals, particularly those working in multilateral and regional contexts, can respond in concrete ways to the governance gap created by invisible procurement rules and diplomatic pressure.

- Treat Procurement as Governance: When governments or institutions in your jurisdiction procure AI systems, the contract terms are governance decisions and should be subject to the same transparency and accountability standards as any regulatory action. This means making vendor agreements public, requiring impact assessments, and ensuring that procurement decisions reflect local values and priorities rather than external pressure.

- Build Regional Coordination on AI Governance: Initiatives like Latam-GPT and regional AI strategies emerging from bodies like the Development Bank of Latin America and the Caribbean suggest that the institutional appetite exists. But coordination must go beyond technical collaboration and address the upstream power asymmetries that shape which AI systems are available, how they can be used, and who controls the data they process.

- Document and Challenge Security Exceptions: When national security framing is used to override corporate governance commitments or technical safeguards, professionals should document these instances and work with regional bodies to establish clear limits on when and how security exceptions can override public governance frameworks. This creates accountability and prevents the normalization of exceptions.

The reality is that AI governance professionals are operating in a system where the most consequential rules are being written without them. But by treating procurement as a governance issue, building regional coordination, and challenging the normalization of security exceptions, they can begin to reclaim agency over how AI systems operate within their jurisdictions.
The alternative is to accept algorithmic governance dependence as inevitable, which would mean surrendering control over one of the most transformative technologies of our time.