The AI regulatory landscape is moving faster than the laws designed to govern it. Most regulations passed in the last few years are already outdated, failing to address emerging technologies such as autonomous AI agents and self-updating models that experts expect to become significant in the coming years. This gap between innovation and regulation is forcing IT leaders to rethink how they prepare for compliance with rules that don't yet exist.

What Are Today's AI Laws Actually Missing?

Current AI regulations focus on a narrow slice of the technology landscape. Most laws target frontier models, high-risk AI systems, and transparency requirements, with particular emphasis on large language models (LLMs) and voice and video deepfakes. This regulatory focus, however, hasn't kept pace with what's actually being built in AI labs.

The biggest blind spot involves system-to-system AI interactions. Current laws are designed around the assumption that humans will interact with AI systems and need to know when they're being used. But as AI systems begin communicating with one another without human intermediaries, existing regulations become largely irrelevant. Other emerging technologies also fall outside the scope of most current legislation, including world models for smart robots and agentic systems that develop new skills by interacting with other agents or by referencing materials such as company documents and research papers.

"Some obligations, like human oversight, are going to be really challenging when it comes to things like AI agents," said William Dunning, managing associate for AI regulation at UK-based law firm Simmons & Simmons.

The uncertainty is real. Legal experts acknowledge considerable ambiguity about how current AI regulations will apply to these emerging technologies, creating a compliance gray zone for companies developing next-generation AI systems.
How Is the Regulatory Landscape Fragmenting Globally?

Rather than converging on a single global standard, AI governance is becoming increasingly multipolar, with three distinct approaches emerging. The European Union has adopted a risk-based regulatory framework centered on oversight and detailed rules, using the scale of its single market to establish international benchmarks through mandatory compliance requirements. The United States, by contrast, has leaned toward a more relaxed approach that prioritizes innovation and deregulation, aiming to unlock the private sector's innovative capacity by loosening federal oversight. China has charted a third path, balancing development with security and innovation with governance through initiatives like the Global AI Governance Initiative.

Within the United States, the fragmentation goes even deeper. No federal AI regulations are expected anytime soon, leaving individual states to publish their own rules. California, for instance, is focused on transparency, watermarking, and how AI will affect individuals and groups. This patchwork means companies operating across multiple jurisdictions must navigate conflicting requirements simultaneously.

Steps for Building AI Governance Before Regulations Catch Up

Legal experts and technology leaders agree that companies cannot wait for perfect regulations to emerge. Instead, organizations should establish foundational governance frameworks now that will make future compliance easier. Here are the key steps to take:

- Conduct a Complete AI Inventory: Lawyers, IT leaders, and engineers need to work together to identify every AI tool in use across the organization. This includes seemingly benign applications like Microsoft Copilot, which still require documented usage guidelines and defined parameters for how they should be used.

- Bridge the Technical-Legal Gap: Effective AI governance requires collaboration between legal teams and engineers.
Lawyers interpret regulations but lack technical expertise, while engineers understand the systems but may not grasp compliance implications. Both perspectives are essential to operationalize governance measures that actually work in practice.

- Establish Cross-Functional Oversight: Create governance structures similar to cybersecurity programs, where engineers work with management to identify gaps, assess risks, and implement safeguards. Engineers can identify missing pieces and the steps needed to reach compliance targets, while lawyers translate those technical realities into governance policies.

The analogy used by legal experts is instructive. As one expert noted, the goal of AI regulation should be comparable to airplane travel: people should feel as safe using AI as passengers do when boarding a plane. This requires systematic oversight, clear safety standards, and accountability mechanisms.

What Happens When Regulations Finally Arrive?

The enforcement phase is already beginning. The EU AI Act, passed in 2024, is entering enforcement this year, though significant ambiguity remains about what will actually be enforced, creating uncertainty for businesses. The shift from policymaking to enforcement marks a critical turning point: companies that have built governance foundations will hold a substantial advantage over those scrambling to comply at the last minute.

Beyond regulatory fines, companies face another enforcement mechanism: product liability litigation. If AI systems cause harm, existing legal frameworks such as product liability law will likely be invoked to compensate victims and deter companies from causing that harm. Governance, in other words, isn't just about regulatory compliance; it's about managing legal exposure and protecting the organization from lawsuits.

Looking ahead, regulation is expected to expand significantly.
Future rules will likely cover workplace AI applications, including how AI is trained, bias in hiring and interviews, and methods for distinguishing between AI-generated and human-generated content. Companies that have already established governance practices in these areas will adapt more easily than those starting from scratch.

The bottom line is clear: the regulatory environment will continue evolving faster than any single law can accommodate. Organizations that treat AI governance as a foundational business practice rather than a compliance checkbox will be better positioned to navigate whatever rules emerge next. The time to build that foundation is now, before regulations force the issue and penalties make the cost of non-compliance prohibitive.