The AI Compliance Gap: Why New Laws Mean Nothing Without Enforcement Funding
AI governance has matured from abstract principles into concrete laws across the EU, US states, and internationally, but a critical gap now threatens to undermine years of progress: enforcement agencies and oversight bodies are starved of funding while policymakers chase the next big idea. The result is a dangerous mismatch between ambitious regulatory frameworks on paper and insufficient capacity to make them real.
What Is AI Security Compliance, and Why Does It Matter Now?
AI security compliance has evolved from a niche ethics concern into a core data security issue that directly affects organizations across every sector. AI security compliance is the practice of applying security controls, data governance policies, and regulatory requirements specifically to how AI systems are deployed, used, and monitored within an organization. It addresses the potential data risks that emerge when employees, developers, or automated systems interact with AI tools, including what data flows into those tools, how outputs are handled, and whether the organization can demonstrate proper AI oversight to auditors or regulators.
The shift from treating AI compliance as a legal and ethics concern to recognizing it as a data security issue has fundamentally changed who owns the problem. When AI compliance was limited to bias and fairness, legal and ethics teams led. Now that AI compliance involves data flows, access controls, and auditability, security teams must drive the compliance program.
Which Regulations Actually Apply to AI Systems Today?
Organizations already operate under overlapping regulatory frameworks that now extend to AI systems, whether or not those frameworks were originally written with AI in mind. The operative question regulators ask is not whether data entered a machine learning model, but whether the organization can account for where the data went and who had access.
- General Data Protection Regulation (GDPR): Applies to AI systems that process personal data. When an employee submits a document containing customer records to a generative AI tool, the organization's GDPR obligations for that data do not pause. The AI tool is a third-party processor, and the organization remains accountable for what happens to the data.
- Sector-Specific Frameworks: Healthcare organizations must account for HIPAA when AI tools process protected health information. Financial services firms operate under frameworks like SOX and PCI DSS that impose audit and data integrity requirements. Law firms and investment managers face sector-specific confidentiality obligations that apply regardless of whether information is processed by a human or an AI system.
- EU AI Act: The first comprehensive AI-specific regulation follows a risk-tiered approach. AI systems classified as high-risk, including those used in HR decisions, credit scoring, and certain security applications, face mandatory transparency, human oversight, and audit trail requirements. Organizations that deploy or use these systems in the EU, or whose systems affect EU residents, must document risk management processes and maintain technical documentation that demonstrates compliance.
- NIST AI Risk Management Framework: In the US, the NIST AI Risk Management Framework (AI RMF) has emerged as the primary voluntary standard for AI governance. While not a regulation, auditors, customers, and enterprise security programs increasingly reference the AI RMF as a baseline.
Why Are Organizations Failing to Implement AI Compliance?
The gap between AI policy and AI compliance enforcement is where most programs fail. Three patterns account for the majority of breakdowns. First, shadow AI occurs when employees adopt AI tools without IT or security review. According to research, 32.3% of ChatGPT usage occurs through personal accounts, as does 24.9% of Gemini usage. Claude and Perplexity see even higher rates of personal account usage, at 58.2% and 60.9%, respectively. Security teams cannot see what data is flowing into these tools and have no way to demonstrate oversight to an auditor.
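As a concrete starting point, discovery can often begin with logs the organization already collects. The sketch below is a minimal, hypothetical example that scans web proxy log lines for domains associated with popular AI tools; the log format, field order, and domain list are illustrative assumptions rather than references to any specific product, and distinguishing personal from corporate accounts would require deeper inspection than this sketch attempts.

```python
import re
from collections import Counter

# Hypothetical inventory of domains associated with generative AI tools.
# A real program would maintain and update this list continuously.
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "claude.ai": "Claude",
    "www.perplexity.ai": "Perplexity",
}

# Assumed proxy log format: "<timestamp> <user> <method> <url> <status>"
LOG_LINE = re.compile(r"^(\S+) (\S+) (\S+) (\S+) (\d{3})$")

def shadow_ai_usage(log_lines):
    """Count requests to known AI tool domains, per user, from proxy logs."""
    usage = Counter()
    for line in log_lines:
        m = LOG_LINE.match(line.strip())
        if not m:
            continue  # skip malformed lines rather than failing the whole scan
        _, user, _, url, _ = m.groups()
        host = re.sub(r"^https?://", "", url).split("/")[0].lower()
        tool = AI_TOOL_DOMAINS.get(host)
        if tool:
            usage[(user, tool)] += 1
    return usage

sample = [
    "2024-05-01T09:12:03Z alice GET https://chat.openai.com/backend/conversation 200",
    "2024-05-01T09:15:44Z bob POST https://claude.ai/api/organizations 200",
]
for (user, tool), count in shadow_ai_usage(sample).items():
    print(f"{user} -> {tool}: {count} request(s)")
```

Output like this feeds an inventory of AI tools actually in use, which is the prerequisite for every control discussed below.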
Second, most organizations have issued AI acceptable use policies, but far fewer have implemented controls that make those policies enforceable in real time. A policy that says "do not submit customer data to unapproved AI tools" is not a compliance control; it is a statement of intent. Compliance requires that the behavior be monitored, that violations trigger a response, and that the organization can produce evidence of both for an auditor.
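To make the distinction concrete, here is a minimal sketch of the difference between a statement of intent and an enforceable control: a pre-submission check that screens text bound for an AI tool, blocks obvious customer-data patterns, and appends an evidence record for auditors. The two regex detectors and the event format are illustrative assumptions; production DLP engines use validated patterns and classifiers, not a pair of regular expressions.

```python
import json
import re
from datetime import datetime, timezone

# Illustrative detectors for data that policy says must not leave the organization.
DETECTORS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_submission(user: str, tool: str, text: str) -> bool:
    """Return True if the text may be sent; otherwise block and record evidence."""
    findings = [name for name, rx in DETECTORS.items() if rx.search(text)]
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "findings": findings,
        "action": "blocked" if findings else "allowed",
    }
    # Append-only evidence trail: these records are what you show an auditor.
    with open("ai_policy_events.jsonl", "a") as log:
        log.write(json.dumps(event) + "\n")
    return not findings

# This submission is blocked, and the violation is recorded as evidence.
check_submission("alice", "UnapprovedChatTool", "Ticket from jane@example.com")
```

Note that the control produces evidence whether it blocks or allows: the audit trail, not the block itself, is what turns a policy into something demonstrable.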
Third, standard audit infrastructure, including network logs, data loss prevention (DLP) alerts, and access records, was not built to capture the full context of an AI interaction: what data was submitted, what the system returned, and what happened to the output.
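One way to close that gap is to capture each AI interaction as a single structured record rather than reassembling it later from network logs and DLP alerts. The schema below is a hypothetical sketch of the minimum context an auditor would need; the field names are assumptions, not an established standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIInteractionRecord:
    """One auditable record per AI interaction: input, output, and disposition."""
    timestamp: str             # when the interaction occurred (ISO 8601, UTC)
    user: str                  # who initiated it
    tool: str                  # which AI system was used
    input_classification: str  # e.g. "public", "internal", "customer_pii"
    input_summary: str         # hash or description of submitted data, not raw content
    output_disposition: str    # what happened to the output: "discarded", "stored", "shared"
    approved_use: bool         # whether the use falls under an approved purpose

record = AIInteractionRecord(
    timestamp="2024-05-01T09:12:03Z",
    user="alice",
    tool="ChatGPT",
    input_classification="customer_pii",
    input_summary="sha256:2f7a19...",  # a hash, so the log does not re-expose the data
    output_disposition="stored",
    approved_use=False,
)
print(json.dumps(asdict(record), indent=2))
```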
How to Build Effective AI Compliance Programs
- Implement Real-Time Monitoring: Move beyond written policies to enforceable controls that monitor behavior, detect violations, and generate evidence for auditors. This requires visibility into which AI tools are in use across the organization and what data is entering desktop or browser-based AI tools.
- Establish Data Governance Across Three Layers: Address the AI tools themselves, including third-party applications and built-in AI features in existing software; the data that flows through those tools; and the organizational processes that govern who can use AI and for what purpose (a minimal policy sketch follows this list).
- Maintain Audit Trails and Documentation: Organizations must know where personal data goes, maintain processing records, and demonstrate the ability to fulfill data subject rights. Under the EU AI Act, high-risk system operators must maintain logs and documentation that show compliance with regulatory requirements.
- Coordinate Across Security and Legal Teams: AI compliance is no longer solely a legal and ethics concern. Security teams must be in the room and increasingly driving the compliance program, working alongside legal, ethics, and business teams.
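As referenced above, the three governance layers can be expressed as a machine-readable policy: which tools are approved, what data sensitivity each may receive, and which roles may use them. The structure below is an assumed illustration, not a prescribed format, and the tool names and data classes are placeholders.

```python
# Hypothetical three-layer policy: approved tools, permitted data, authorized roles.
POLICY = {
    "ChatGPT Enterprise": {"max_data_class": "internal", "allowed_roles": {"engineering", "marketing"}},
    "Claude (corporate)": {"max_data_class": "customer_pii", "allowed_roles": {"support"}},
}

# Data sensitivity levels, ordered least to most sensitive.
DATA_CLASSES = ["public", "internal", "customer_pii"]

def is_permitted(tool: str, data_class: str, role: str) -> bool:
    """Layer 1: is the tool approved? Layer 2: may it receive this data class?
    Layer 3: is this role authorized to use it?"""
    entry = POLICY.get(tool)
    if entry is None:
        return False  # unapproved tool: deny by default
    within_limit = DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(entry["max_data_class"])
    return within_limit and role in entry["allowed_roles"]

assert not is_permitted("ChatGPT Enterprise", "customer_pii", "engineering")  # data too sensitive
assert is_permitted("Claude (corporate)", "customer_pii", "support")          # approved path
assert not is_permitted("RandomChatbot", "public", "engineering")             # not in inventory
```

Deny-by-default for unlisted tools is the design choice that connects this policy back to shadow AI: any tool outside the approved inventory fails the first check automatically.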
The Funding Crisis Threatening AI Governance Worldwide
Even as governments pass AI regulations, philanthropies and governments alike are failing to sustain the enforcement agencies and oversight institutions responsible for making those rules real. The central challenge in AI policy today is not how to invent new principles but how to ensure that the hard-won progress of recent years is implemented, enforced, and sustained.
Years of philanthropic investment have built AI expertise and institutions across dozens of countries. Universities have launched interdisciplinary AI governance programs. Civil society organizations have developed tools to audit automated decision-making systems, assess algorithmic risk, and support communities harmed by AI. Regulatory agencies in many states are no longer starting from scratch; they draw on shared playbooks and peer networks developed through philanthropic support.
Yet this is precisely the moment when progress is most fragile. Too often, funding shifts away just as laws are passed, oversight bodies are formed, and courts begin to confront algorithmic decision-making in practice. Some early philanthropic initiatives in AI ethics have pivoted toward AI safety or technical research agendas, leaving gaps in support for implementation and accountability.
"The risks include fragmentation, short-termism, and the quiet abandonment of institutions with regulatory responsibility, such as consumer protection agencies, courts, independent oversight bodies, and data protection authorities," noted Merve Hickok and Marc Rotenberg, president and founder of the Center for AI and Digital Policy.
Implementation of laws and standards requires sustained funding of existing institutions as enforcement agencies and courts begin applying new rules. In California, an AI transparency law was recently upheld over industry objections. The law mandates that AI developers publish summaries of their AI training data in areas such as employment, housing, and lending. Bolstering efforts like these requires funding technical expertise within regulatory agencies and independent research on the discriminatory effects of AI.
Supporting litigation and legal analysis is also critical, as is funding public-interest organizations that can monitor compliance, bring complaints, and ensure civil society has a role in the rulemaking and enforcement process. Without effective enforcement, even the best-designed governance frameworks risk becoming paper promises.
What Happens If Enforcement Funding Dries Up?
The window of opportunity is narrow. Public opinion polls continue to show widespread concern about the impact of AI and support for new safeguards. US courts are beginning to grapple with cases in areas such as algorithmic pricing and bias in criminal sentencing algorithms. International bodies are setting precedents that will shape AI governance for decades.
If support is not sustained, however, much that was accomplished could be wiped away through regulatory rollback, institutional fatigue, or the simple absence of resources to enforce the rules already on the books. For funders, particularly those with long-standing commitments to technology, civil rights, and democracy, the message is straightforward: this is not a field in need of reinvention. It is a field in need of reinforcement through multi-year commitments, support for enforcement capacity, and sustained investment in the institutions now responsible for democratic oversight of AI.
Whether existing AI governance policies become durable safeguards or empty promises will depend in no small part on the choices that philanthropies, governments, and regulatory agencies make now.