The U.S. Treasury Department is pushing to loosen regulations around artificial intelligence in banking, arguing that overly strict rules actually make the financial system less safe. Starting this week, the Treasury's Office of the Financial Stability Oversight Council (FSOC) and its Artificial Intelligence Transformation Office (AITO) are launching the AI Innovation Series, a set of public-private conferences designed to identify which regulations should be reduced or eliminated to help banks deploy AI tools more freely.

The reasoning may sound counterintuitive: Treasury officials say that when banks cannot quickly adopt AI for fraud detection, credit decisions, and risk management, the entire financial system becomes less efficient and potentially less secure. But this regulatory pivot is colliding head-on with growing concerns from risk experts, consumer advocates, and even the government's own watchdog agency about the dangers AI poses to financial stability.

What Is the Treasury Actually Proposing?

The AI Innovation Series will hold four roundtables bringing together banks, technology companies, regulators, and specialized experts to discuss which federal rules are slowing down AI adoption and how to streamline them. Treasury Secretary Scott Bessent framed the initiative as a shift in philosophy: moving away from a regulatory posture focused on "constraint" toward one that treats "failure to adopt productivity-enhancing technology as its own risk."

The Treasury's argument rests on a simple premise. AI tools are already embedded in core banking functions like fraud detection, credit underwriting, pricing, trading, and customer interactions. If regulations prevent banks from deploying these tools effectively, the argument goes, the financial system itself becomes vulnerable. Christina Skinner, deputy assistant secretary for FSOC, stated the case directly: "When institutions cannot deploy tools that improve fraud detection, credit allocation, and operational resilience, the system becomes less efficient and less secure." Paras Malik, the Treasury's Chief AI Officer, added that the priority now is "operationalization, embedding AI into core workflows in ways that measurably enhance risk management and resilience." The message is clear: the government wants to accelerate AI adoption in finance.

Why Are Risk Experts Pushing Back So Hard?

While the Treasury sees regulatory relief as essential, a major Canadian financial industry forum reached a starkly different conclusion. The Global Risk Institute (GRI), in partnership with Canada's top financial regulators, including the Office of the Superintendent of Financial Institutions (OSFI) and the Bank of Canada, conducted extensive research on AI risks in banking. Its findings suggest that AI is moving faster than governance frameworks can handle.

The GRI's Financial Industry Forum on AI identified three critical vulnerabilities that looser regulation could exacerbate:

- Governance Gaps: AI is now a strategic governance issue that requires board-level oversight, yet many institutions lack clear accountability structures for AI-driven decisions and outcomes.
- Operational Resilience Strain: As banks depend more heavily on AI tools, cloud infrastructure, and external data providers, they become exposed to concentrated risks; a single outage or compromise at a major AI vendor could impact multiple institutions simultaneously (a toy concentration metric is sketched after this list).
- Workforce Readiness Deficits: Employees across front-, middle-, and back-office functions need training to understand what AI can do and where it can fail, yet many institutions lack this AI literacy.
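To make the concentration concern concrete, here is a minimal Python sketch of how a risk team might quantify sector-wide dependence on AI vendors using a Herfindahl-Hirschman Index (HHI). The institutions, vendors, and exposure shares below are entirely hypothetical; the 2,500 cutoff is the conventional antitrust threshold for a highly concentrated market, not a supervisory standard.

```python
from collections import defaultdict

# Hypothetical data: the share of each bank's critical AI workloads running
# on a given vendor (shares per bank sum to 1.0). Illustrative only.
EXPOSURES = {
    "Bank A": {"VendorX": 0.7, "VendorY": 0.3},
    "Bank B": {"VendorX": 0.6, "VendorZ": 0.4},
    "Bank C": {"VendorX": 0.8, "VendorY": 0.2},
}

def sector_vendor_shares(exposures):
    """Average each vendor's workload share across all institutions."""
    totals = defaultdict(float)
    for vendors in exposures.values():
        for vendor, share in vendors.items():
            totals[vendor] += share / len(exposures)
    return dict(totals)

def herfindahl_index(shares):
    """HHI on the standard 0-10,000 scale: sum of squared percentage shares."""
    return sum((share * 100) ** 2 for share in shares.values())

shares = sector_vendor_shares(EXPOSURES)
hhi = herfindahl_index(shares)
print(f"Sector-wide vendor shares: {shares}")
print(f"Vendor concentration HHI: {hhi:.0f}")
if hhi > 2500:  # conventional threshold for a highly concentrated market
    print("High concentration: one vendor outage could hit many institutions at once.")
```

In this toy example, a single vendor carries 70 percent of the sector's critical AI workloads, pushing the HHI well past the concentration threshold. That is precisely the failure mode the GRI warns about: risk that looks diversified at the level of any one bank but is concentrated at the level of the system.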
The GRI report emphasized that "AI has the potential to amplify risks inherent in financial activities, and these risks can affect consumers, investors, financial institutions, and financial markets." This is not a minor concern: Allianz's 2026 Risk Barometer ranked artificial intelligence as the second-highest global business threat, up sharply from its rankings in previous years, with cyber incidents remaining the top threat for the fifth consecutive year.

How Can Regulators Manage AI Risk While Enabling Innovation?

The tension between the Treasury's push for deregulation and the GRI's warnings about systemic risk raises a critical question: how can regulators enable AI innovation without creating new vulnerabilities? Experts suggest several practical approaches:

- Board-Level Accountability: Establish clear governance structures in which boards and executive teams have visibility into where AI is used, how it is monitored, and who is responsible when things go wrong (a minimal inventory sketch follows this list).
- Third-Party Risk Management: Strengthen oversight of technology and data dependencies, particularly concentration risk in cloud and AI service providers that could create systemic vulnerabilities.
- Cyber and Operational Safeguards: Reinforce basic controls, including strong cyber hygiene, rigorous vendor management, and clearer oversight of technology supply chains beyond direct institutional control.
- Workforce Training Programs: Invest in AI literacy at every level, from engineers and risk teams to executives and board members, so employees can interrogate AI outputs and detect failures.
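As one illustration of what board-level accountability could look like in practice, here is a minimal Python sketch of a firm-wide AI model inventory with an exception-based escalation report. Every model, field, owner, and threshold below is hypothetical; a real inventory would follow an institution's own model risk management policy (in the U.S., typically aligned with guidance such as the Federal Reserve's SR 11-7).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIModelRecord:
    """One entry in a firm-wide AI inventory; all fields are illustrative."""
    model_id: str
    business_use: str            # e.g., "credit underwriting", "fraud detection"
    accountable_owner: str       # a named executive, not a team alias
    vendor: str | None           # None for models built in-house
    last_validated: date
    monitoring: list[str] = field(default_factory=list)

    def is_stale(self, max_age_days: int = 365) -> bool:
        """Flag models whose last independent validation exceeds the policy window."""
        return (date.today() - self.last_validated).days > max_age_days

# Hypothetical inventory entries.
inventory = [
    AIModelRecord("frd-001", "fraud detection", "Chief Risk Officer", "VendorX",
                  date(2025, 3, 1), ["drift", "false-positive rate"]),
    AIModelRecord("cr-014", "credit underwriting", "Head of Credit", None,
                  date(2023, 11, 15), ["disparate-impact testing"]),
]

# A board report might surface only the exceptions: stale or unmonitored models.
for record in inventory:
    if record.is_stale() or not record.monitoring:
        print(f"ESCALATE {record.model_id} ({record.business_use}): "
              f"owner={record.accountable_owner}, last validated {record.last_validated}")
```

The design choice worth noting is the exception-based report: boards do not need every model's telemetry, only a reliable signal of which models have fallen outside policy and which named executive owns them.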
The GRI's research introduced an "AGILE" framework for navigating AI risk, though the specific details of that framework were not fully elaborated in the available materials. The broader point is that governance must evolve alongside deployment rather than play catch-up after problems emerge. Sonia Baxendale, president and CEO of the GRI, stressed that "managing AI risk is no longer confined to individual institutions. It requires collaboration across the financial ecosystem." The implication is that regulatory relief without corresponding governance improvements could create systemic vulnerabilities.

What Are Consumer Advocates Saying About This Shift?

The Treasury's deregulation push has already drawn criticism from a broad coalition of organizations. In December, dozens of groups, including technology, consumer, and civil rights organizations, wrote to the House Financial Services Committee opposing a bill that would reduce AI oversight in finance. Their message was direct: "These potential benefits for consumers, customers, investors, markets, and the financial system can only materialize if people are protected from the many risks of AI in financial services through the consistent application and enforcement of federal civil rights, consumer protection, investor protection, market integrity, and financial supervision statutes and regulations."

The Government Accountability Office (GAO), Congress's watchdog agency, has documented real risks from AI in banking. In a report last year, the GAO noted that while banks and investment companies are using AI for fraud detection, credit decisions, and risk management, the technology poses genuine threats, including cybersecurity vulnerabilities, privacy breaches, biased lending decisions, and data quality problems.

The regulatory environment is also tightening globally. In Europe, the EU Artificial Intelligence Act's transparency obligations are set to take effect in August 2026, requiring disclosure of deepfakes and other AI-generated content. This creates a potential compliance headache for U.S. banks operating internationally if American regulations diverge significantly from European standards.

What Happens Next?

The AI Innovation Series represents a critical moment for U.S. financial regulation. The Treasury is betting that deregulation will accelerate AI adoption and strengthen the financial system. But the GRI's research and the GAO's warnings suggest that governance and risk management must keep pace with technological deployment. The outcome of these conferences could shape whether AI becomes a tool that makes banking safer and more efficient, or a source of new systemic vulnerabilities that regulators are unprepared to manage.

For now, the Treasury is moving forward with its deregulation agenda, while experts continue to warn that the dangers of moving too fast may outweigh the costs of moving too slowly.