A new report from Vanderbilt Law School warns that Congress should begin planning now for a potential AI market crash, as trillions of dollars in infrastructure investment far outpace actual AI revenues. The analysis, conducted by Asad Ramzanali, Director of AI and Technology Policy at Vanderbilt's Policy Lab, argues that waiting until a crisis hits would be far more costly than proactive regulation today.

What Makes the AI Investment Bubble Different from Previous Crises?

The current AI investment landscape mirrors the conditions that preceded the 2008 financial crisis, but with a distinctly modern twist. Major chipmakers, cloud computing companies, and AI model developers are entangled in what experts call "circular equity" arrangements, where companies own stakes in their customers and suppliers. This creates a dangerous domino effect: if one major player faces financial trouble, the entire industry could cascade into insolvency.

Beyond these equity tangles, the financial engineering underlying AI infrastructure is opaque and complex. Companies are using special purpose vehicles (SPVs), asset-backed securities, and highly leveraged structures that obscure how much debt is actually involved. State and local governments are competing to offer tax breaks for data center construction, further distorting the market and hiding the true cost of AI infrastructure development.

How Should Congress Prepare for an AI Market Correction?

- Curtail Financial Engineering: Congress should prohibit large debt financings that don't disclose true sources of capital, end undisclosed debt-shifting practices, and require full transparency on all data center deals. The government should also prosecute any fraud or illegal activities related to AI financing, similar to how previous bubbles were addressed through criminal prosecution and prison time for those who defrauded the public.
- Convert Stranded Assets into Public Infrastructure: If financial collapse occurs, companies may abandon data centers and computing infrastructure. Congress should authorize agencies to purchase these assets and convert them into a public cloud option, providing computing infrastructure for public purposes like drug discovery, disaster planning, and energy sustainability research.
- Protect Workers from Mass Displacement: Companies are already engaging in "AI-washing," cutting jobs while blaming AI for workforce reductions. Congress should expand unemployment insurance, relax work requirements on social safety net programs, and consider creating a digital Works Progress Administration (WPA) modeled after the Great Depression-era employment program to address potential mass job losses.
- Reform AI Markets Structurally: Congress should establish a "Glass-Steagall for AI" that separates algorithms and software from data centers and hardware, preventing the same companies from dominating both sides of the market. Utility-style regulations should govern digital utilities like foundation models and cloud computing, administered by a new digital regulatory agency.
- Ban Surveillance-Based Business Models: A financial crash could accelerate the shift toward extractive business models, including surveillance-based pricing and wage discrimination. Congress should directly ban these practices as part of a broader privacy regime to prevent further entrenchment of surveillance advertising and pricing.

Ramzanali's proposals also address worker surveillance, which is likely to intensify during an economic crisis as companies seek to boost productivity. Congress should establish limits on AI-enabled workplace monitoring practices that harm worker welfare.

Meanwhile, a coalition of global leaders, including Nobel Prize winners and former heads of state, is calling for immediate government action on AI governance from a different angle.
The Elders, a group of prominent international figures, emphasize that governments must prioritize public safety over corporate profit in AI regulation. "A government's first responsibility is to protect its citizens. As the scale of AI capability accelerates exponentially, the current gap in governance is becoming a crisis," stated Juan Manuel Santos, former President of Colombia and Chair of The Elders.

What Are the Most Urgent Global AI Governance Challenges?

The Elders identify three critical areas demanding immediate international action. First, militaries are integrating commercial AI systems into weapons prematurely, and these systems are already enabling violations of international law; the biological, chemical, and nuclear risks posed by unregulated military AI could be catastrophic. Second, AI systems are enabling mass surveillance, discrimination, and erosion of civil liberties, while AI-driven political disinformation is undermining truth and exacerbating a breakdown in public trust. Third, AI data centers already consume more electricity than entire countries and are depleting water reserves in drought-affected regions, with these environmental harms falling disproportionately on vulnerable populations.

The Elders reject the common argument that governments cannot regulate AI effectively because technology moves too quickly or because companies will self-regulate. They argue that these narratives are misleading and that there is nothing inevitable about how AI develops. Instead, who benefits and who is harmed by AI is a shared global challenge that requires collective action, not a race between countries or corporations.

The convergence of these warnings, from both U.S. financial policy experts and international governance leaders, suggests that AI regulation is no longer a matter of "if" but "when."
The question facing policymakers is whether they will act proactively now or scramble to respond after a crisis has already begun.