AI Companies Are Spending Millions to Shape Regulation. Here's What They're Actually Doing.
AI companies face a serious image problem in the United States, and they're responding with an aggressive strategy: funding thinktanks, publishing policy papers, and spending millions on lobbying to reshape how the public and policymakers think about their technology. OpenAI spent nearly $3 million on lobbying in 2025 alone, while simultaneously launching initiatives designed to appear thoughtful about AI's societal impact. The strategy reveals a fundamental tension: these companies publicly advocate for stronger oversight while privately working to block regulation that could constrain their business.
Why Are AI Companies Suddenly Focused on Policy and Public Relations?
The timing is no accident. Public trust in AI is collapsing. A Pew Research Center survey found that only 16% of Americans believe AI will help people think more creatively, and just 5% think it will help people form meaningful relationships. An NBC News poll showed that only 26% of voters have a favorable opinion of AI overall. For companies whose entire business model depends on widespread adoption and favorable regulation, these numbers represent an existential threat.
OpenAI's response has been multifaceted. The company published a 13-page policy paper titled "Industrial Policy for the Intelligence Age" that calls for reimagining the social contract around AI. It also acquired a tech-friendly podcast network, announced plans to open a Washington DC office with a dedicated space for policymakers to learn about its technology, and launched various public-facing initiatives. Anthropic, OpenAI's main rival, announced its own thinktank, the Anthropic Institute, with similar goals of exploring how AI growth would disrupt society.
Sam Altman, OpenAI's CEO, acknowledged the problem directly at BlackRock's investment conference in Washington DC. He noted that AI faces significant headwinds in public perception, explaining that datacenters are being blamed for electricity price hikes and that companies are using AI as a scapegoat for layoffs regardless of whether the technology actually caused them.
What's Actually in These Policy Papers, and Is It Genuine?
OpenAI's policy paper includes headline-grabbing proposals like a four-day work week and the creation of a "public wealth fund" that would return AI-generated profits directly to citizens. These ideas echo the tech industry's long-standing interest in universal basic income. The paper frames these proposals as "a starting point for a broader conversation" rather than firm policy recommendations.
However, critics argue the paper is fundamentally a public relations exercise that shifts responsibility away from the companies themselves. The document describes an AI-dominated world as inevitable, presenting the technology as an unstoppable force rather than a product that can be regulated internally or through legislation. This framing allows companies to advocate for social welfare goals while avoiding any meaningful commitment of resources to achieve them.
"What they've done very cannily here is sort of outline a set of social welfare goals while abdicating any responsibility or any meaningful commitment of resources toward those goals," said Sarah Myers West, co-executive director at the non-profit AI Now Institute.
The gap between OpenAI's public statements and its private actions is striking. While the company's policy paper calls for guardrails on safe AI and suggests lawmakers create oversight mechanisms, the company has simultaneously lobbied successfully for an administration that has taken an aggressively deregulatory stance toward AI.
How Are AI Companies Using Lobbying and Political Influence to Shape Regulation?
The lobbying effort extends far beyond policy papers. OpenAI's president, Greg Brockman, co-founded a pro-AI Super PAC (political action committee) that raised more than $125 million in 2025. This PAC has already run political advertisements in New York against congressional candidates who favor AI regulation. The company is also backing a bill in Illinois that would shield AI firms from liability in cases where their models cause serious societal harms, such as creating chemical weapons or causing mass deaths.
Anthropic has pursued a different but equally aggressive strategy, spending more than $3 million on its own lobbying efforts and backing a separate Super PAC with goals more welcoming of regulation. Despite this apparent difference, the AI industry remains broadly aligned with the Trump administration, which continues to act in its interest.
The Trump administration has actively worked to block state-level AI regulation, adopting the industry's argument that a patchwork of state laws would hamper innovation and economic growth. The administration signed a legally contested executive order attempting to block states from imposing limits on AI. More recently, the White House pressured a Republican Utah state senator not to propose a bill calling for transparency and child protection regulations on AI.
Steps to Understand AI Regulation Efforts and Their Implications
- Track Lobbying Spending: Monitor how much AI companies spend on lobbying each year and which specific bills they support or oppose. This reveals the gap between their public statements and private interests.
- Examine Policy Paper Claims: When AI companies publish policy proposals, compare them to their actual business practices and lobbying positions to identify inconsistencies between rhetoric and action.
- Follow State-Level Regulation: Pay attention to state bills addressing AI transparency, child safety, and liability, as these represent the most direct challenges to industry influence while federal regulation stalls.
- Assess Thinktank Independence: Evaluate whether thinktanks and research institutes funded by AI companies maintain editorial independence or primarily advance their funders' interests.
Experts argue that the AI industry is taking advantage of a significant knowledge gap at the state government level. State legislatures typically have short sessions and limited staff, making them vulnerable to industry arguments that any AI regulation will stifle innovation. This structural disadvantage has given AI companies an opening to shape regulation before it becomes entrenched.
"They're taking advantage, essentially, of the fact that these folks have short sessions and no staff, to convince them that any regulation of AI will stifle innovation," said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center.
The broader concern among regulation advocates is that while companies wait for Congress to act, they will continue operating largely unregulated. This is precisely what the industry wants. By funding thinktanks and publishing policy papers that call for regulation while simultaneously lobbying against specific regulatory measures, AI companies are attempting to control the terms of the debate itself.
The disconnect between OpenAI's public advocacy for oversight and its private lobbying against regulation reveals a calculated strategy: appear thoughtful and responsible while working behind the scenes to preserve the status quo. As public disapproval of AI grows and regulation becomes an increasingly central issue in political campaigns, this tension between image management and actual policy influence will likely intensify.