The Hidden Rulebook: Why AI Contracts Matter More Than Laws for Sovereign Nations
When smaller nations buy artificial intelligence systems, the real decisions about data ownership, transparency, and control aren't made in parliament or international treaties; they're buried in procurement contracts. A new analysis reveals that bilateral agreements between governments and tech companies function as a "mini-constitution" for how AI actually operates in practice, often determining more about national sovereignty than any legislation ever could.
Why Are Governments Treating AI Procurement as a Diplomatic Issue?
For decades, technology policy has been treated as a technical matter, handled by procurement teams under tight deadlines with limited expertise. But as governments increasingly deploy AI for border management, welfare fraud detection, and predictive policing, researchers at Harvard Kennedy School's Carr-Ryan Centre have documented a fundamental shift: the most consequential decisions about AI governance are now being made through contract negotiations rather than public law.
This matters because procurement contracts contain provisions that directly shape how AI systems operate within a country's borders. When a ministry signs a long-term agreement for an AI tool, the contract often becomes the only binding document governing how personal data is handled, whether decisions can be logged and reviewed, and what happens if a government wants to switch providers.
"Fundamental questions regarding AI and human rights are resolved through bilateral negotiations and standard contract language, rather than through public law or multilateral agreements," explained researchers studying governance by procurement.
Harvard Kennedy School's Carr-Ryan Centre, Governance by Procurement Research
What Specific Contract Clauses Actually Control AI Sovereignty?
The devil is in the details. Several contractual elements have direct governance effects that smaller nations often overlook during negotiations:
- Data Ownership Clauses: Determine whether a public authority retains control over training data and system logs, or whether the vendor can reuse data across other clients and jurisdictions without permission.
- Audit and Inspection Rights: Decide whether regulators, judges, or independent experts can examine how a model works internally, or must rely entirely on vendor claims about system behavior.
- Update Provisions: Specify whether a technology provider can unilaterally change models and parameters, or must seek government approval and provide documentation when the system materially changes.
- Termination and Exit Conditions: Define how expensive and difficult it is to leave a particular AI architecture once it becomes embedded in government operations and administrative practice.
The United States has already demonstrated how assertive procurement can reshape AI governance. The General Services Administration proposed a "Basic Safeguarding of Artificial Intelligence Systems" clause that supersedes commercial terms, asserts government ownership over data and custom developments, and imposes disclosure obligations for AI tools used in contract performance. This approach shows that large purchasers can use procurement language as a governance tool.
How Can Smaller Nations Achieve Real AI Sovereignty Through Smarter Contracts?
Open-weight AI models, whose trained parameters are publicly released and can be inspected and fine-tuned by anyone, were supposed to solve this problem. In theory, they allow smaller countries to develop and adapt AI systems aligned with their own legal frameworks, languages, and social priorities, rather than relying solely on closed platforms controlled by foreign companies. But in practice, many governments still lack the internal capacity to maintain these systems independently.
Morocco's recent partnership with Mistral AI illustrates the paradox. The initiative was promoted as developing "AI made in Morocco" and reducing reliance on closed foreign platforms. However, most hosting and computational resources remain located in European data centers, and critical infrastructure sits outside Moroccan jurisdiction. Without significant investment in domestic infrastructure and contracting expertise, Morocco risks remaining dependent on external governance frameworks, even when using theoretically open technology.
To realize genuine sovereignty with open-weight models, smaller nations should reshape how they buy AI. This doesn't require grand declarations; it requires concrete protections written into contracts:
- Inspection and Audit Rights: Require robust access to documentation on model training, performance across different groups, and significant changes over time, backed by independent experts.
- Portability Requirements: Mandate that models and key data structures are exportable in interoperable formats without excessive fees, preventing vendor lock-in.
- Regional Cooperation: Develop shared contract templates and regional standards that give smaller nations collective bargaining power when negotiating with major suppliers.
- Infrastructure Control: Invest in domestic data centers and computational capacity so that critical AI infrastructure remains under national jurisdiction, not hosted abroad.
What Does This Mean for the Global AI Market?
The shift toward treating procurement as a diplomatic issue reflects broader recognition that AI governance is fundamentally about power and control. Presight, a global AI company focused on government and critical infrastructure deployment, has seen explosive growth in interest from nations seeking to build sovereign AI capabilities. The company received 376 applications from 62 countries for its AI Accelerator Program's second cohort, more than tripling the 120 applications from 17 countries in the first cohort.
Applications came primarily from the Middle East (162), Asia Pacific (84), Europe (65), and North America (42), with the United Arab Emirates (140), United States (37), India (26), United Kingdom (19), and South Korea (16) leading by volume. This geographic diversity signals that nations across every region now view AI procurement and deployment as a strategic priority.
"The tripling of applications for Cohort II reflects the success we have achieved with this Accelerator. We have AI innovators from around the world who want access to our program to replicate the achievements they saw from Cohort I and unlock the same commercial pathways," stated Magzhan Kenesbai, Chief Growth Officer of Presight.
Magzhan Kenesbai, Chief Growth Officer at Presight
The first cohort of Presight's accelerator has already generated significant commercial momentum. Cohort I companies represent a potential total contract value of $26 million currently in discussion, with $1 million in confirmed investment into NodeShift from the Presight-Shorooq Fund I. Companies like Vulcan have signed contracts advancing generative AI security, while NodeShift has secured a strategic agreement with Presight to commercialize sovereign AI infrastructure with major government clients.
The real lesson emerging from these developments is that AI sovereignty is not primarily a technical problem to be solved by building better models or acquiring more computing power. It's a contractual and diplomatic problem. Nations that understand how to negotiate procurement agreements, retain data ownership, maintain audit rights, and preserve the ability to switch providers will exercise genuine control over AI systems. Those that don't will find themselves renting not just technology, but the governance rules that come with it.
" }