Financial institutions across Canada face an uncomfortable paradox: the biggest risk they face isn't moving too fast with artificial intelligence, but doing too little to prepare for it. A major report from the Financial Industry Forum on Artificial Intelligence (FIFAI II), led by the Global Risk Institute and Canada's top financial regulators, reveals that AI is already embedded in critical banking functions such as fraud detection, compliance, and trading decisions. The real danger now is that plan sponsors and financial leaders who hesitate to act decisively could find themselves blindsided by threats they saw coming but failed to address.

Why Is AI Suddenly a Board-Level Risk?

For decades, financial institutions managed risk through established frameworks and human oversight. But AI changes the equation fundamentally. The technology now powers decisions that affect millions of customers and trillions of dollars in assets, yet many boards still lack basic literacy about how these systems work, what they can do, and where they fail.

FIFAI II frames AI risk as a strategic issue that demands the same attention boards give to capital adequacy or cybersecurity. The report emphasizes that institutions face a genuine dilemma: move too quickly without proper risk management and you create operational chaos and consumer harm; move too slowly and you miss competitive advantages while threats evolve faster than your defenses. One participant in the forum captured the tension bluntly: "The biggest risk is not doing enough."

"AI is a transformative force, both awe-inspiring and potentially perilous. Its true impact will hinge on disciplined, responsible innovation and robust collaboration across borders and sectors," said Peter Routledge, Superintendent at OSFI.

What Specific Threats Are Catching Regulators' Attention?

The threats are concrete and escalating. Deepfake attacks, which use AI to create convincing but fake audio and video, have increased twentyfold over the past three years, according to data from the Federal Reserve Board of Governors. These aren't theoretical risks; they're happening now. Criminals are using AI-generated voice clones to impersonate customers and employees, which is why 91 percent of financial institutions globally are reconsidering their voice-verification systems.

Beyond deepfakes, a new threat called "Fraud-as-a-Service" has emerged. Criminals can now purchase turnkey AI tools that dramatically increase the scale, speed, and sophistication of financial fraud attacks. This commodification of fraud means that even unsophisticated bad actors can launch attacks that rival those of organized crime networks.

On the market side, FIFAI II warns that AI-powered trading models trained on similar data may move in concert during periods of stress, potentially intensifying short-term volatility and creating what regulators call "procyclical shifts" in financial markets. Agentic AI systems, which act autonomously and make multi-step decisions at machine speed, could amplify funding outflows and destabilize balance sheets during crises.
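The report describes this herding dynamic qualitatively rather than with a model, but a toy simulation makes the mechanism concrete. The Python sketch below is illustrative only: the correlation parameter, trading thresholds, and price-impact rule are arbitrary assumptions, not figures from FIFAI II. It treats each trading model's signal as a blend of a shared factor, standing in for common training data, and private noise; the more the signals share, the more models sell into the same shock.

```python
import numpy as np

def simulate_drawdown(signal_correlation, n_models=50, n_steps=250,
                      shock_step=100, seed=0):
    """Toy market: each model buys or sells when its signal crosses a
    threshold, and the price moves with the net fraction trading.
    Higher signal correlation stands in for models trained on similar
    data, so more of them fire on the same day."""
    rng = np.random.default_rng(seed)      # same noise for every correlation level
    common = rng.standard_normal(n_steps)  # shared factor (common training data)
    price, trough = 100.0, 100.0
    for t in range(n_steps):
        shock = -3.0 if t == shock_step else 0.0   # one-off stress event
        signals = (np.sqrt(signal_correlation) * (common[t] + shock)
                   + np.sqrt(1.0 - signal_correlation) * rng.standard_normal(n_models))
        net_flow = np.mean(signals < -2.0) - np.mean(signals > 2.0)
        price *= 1.0 - 0.10 * net_flow             # crude linear price-impact rule
        trough = min(trough, price)
    return 100.0 - trough                          # drop from start to lowest price

for rho in (0.1, 0.5, 0.9):
    print(f"signal correlation {rho}: drawdown {simulate_drawdown(rho):.1f} points")
```

In this toy, the drawdown from the same shock deepens sharply as the correlation rises, which is the report's procyclicality concern in miniature: nothing about any individual model changes, only how much the models have in common.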
How Should Financial Leaders Respond to These Risks?

FIFAI II recommends a structured approach called the AGILE framework, which gives institutions a roadmap for balancing innovation with prudent risk management:

- Awareness: Anticipate AI-driven threats ranging from macro disruption to market volatility and disinformation, then build these scenarios into stress tests so leadership understands the potential impacts.
- Guardrails: Maintain strong, adaptive controls over data quality, consumer protection, and third-party relationships, with clear accountability for AI outcomes and decision-making.
- Innovation: Deploy AI to improve fraud detection, cyber defense, compliance monitoring, and operational efficiency rather than viewing AI purely as a risk.
- Learning: Scale AI literacy across boards, executives, staff, and consumers so everyone understands the capabilities and limitations of these systems.
- Ecosystem Resiliency: Strengthen information-sharing between institutions, develop crisis-response playbooks, and establish common standards for critical third parties and digital identity.

The report also emphasizes that boards and senior management should improve AI literacy so leaders understand "AI capabilities, risks, and limitations," establish explicit executive oversight of AI where it doesn't already exist, embed horizon scanning into standard risk practice, and keep governance frameworks "evergreen" as technologies like agentic AI and quantum computing advance.

What About the Supply Chain Problem?

One often-overlooked risk is concentration in AI and cloud providers. FIFAI II highlights how dependent financial institutions have become on a small number of vendors. The July 2024 CrowdStrike outage caused approximately $5.4 billion in losses for Fortune 500 companies, excluding Microsoft, illustrating the systemic impact of single points of failure. Many AI services involve complex "nth-party" chains in which a failure at any layer can propagate across institutions. Even large Canadian firms may have limited leverage over contractual terms, operational transparency, or remediation timelines with these vendors.

To address this, FIFAI II recommends that institutions map their deeper supply-chain dependencies, set concentration limits on any single vendor, develop exit and substitution plans, and test scenarios that assume correlated disruption across multiple providers.
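The report stops at the level of principle here, but the first two recommendations, mapping dependencies and setting concentration limits, lend themselves to a simple mechanical check. The Python sketch below is a minimal illustration under assumed data: the vendor names, workloads, criticality weights, and the 40 percent limit are all hypothetical, not drawn from FIFAI II.

```python
from collections import defaultdict

# Hypothetical dependency map: workload -> (vendor, criticality weight).
# All names, weights, and the 40% limit are illustrative assumptions.
DEPENDENCIES = {
    "fraud-scoring":    ("cloudco",  3),
    "aml-monitoring":   ("cloudco",  3),
    "model-hosting":    ("cloudco",  2),
    "voice-biometrics": ("vendor-b", 2),
    "kyc-screening":    ("vendor-c", 1),
}
CONCENTRATION_LIMIT = 0.40  # max share of criticality-weighted exposure per vendor

def vendor_shares(deps):
    """Aggregate criticality-weighted exposure by vendor."""
    totals = defaultdict(float)
    for vendor, weight in deps.values():
        totals[vendor] += weight
    grand_total = sum(totals.values())
    return {vendor: total / grand_total for vendor, total in totals.items()}

for vendor, share in sorted(vendor_shares(DEPENDENCIES).items(),
                            key=lambda item: -item[1]):
    status = ("over limit: needs an exit/substitution plan"
              if share > CONCENTRATION_LIMIT else "ok")
    print(f"{vendor:16s} {share:5.0%}  {status}")
```

A real inventory would go at least one layer deeper, recording each vendor's own critical suppliers, since the report's "nth-party" warning is precisely that the failure can sit two or three links down the chain.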
How Can Institutions Build Better AI Fraud Detection?

While the risks are real, AI also offers powerful defensive capabilities. Feedzai, a leading financial crime prevention company, recently unveiled RiskFM, an AI foundation model purpose-built for detecting fraud and money laundering across the entire financial crime lifecycle. Unlike traditional approaches that rely on manually engineered machine learning models built one customer at a time, RiskFM represents a fundamental shift in how financial institutions approach fraud prevention.

Foundation models are large AI systems trained on massive datasets to understand patterns and make predictions across multiple related tasks. RiskFM was trained on an exceptionally broad dataset spanning onboarding, digital activity, payments, transfers, and anti-money laundering workflows, enabling it to detect and adapt to financial crime with unprecedented speed and precision.

"Foundation models have reshaped language, vision, and audio, but financial crime has remained stubbornly resistant to that wave. Feedzai's RiskFM is a credible attempt to close that gap," explained Sam Abadir, research director for risk, financial crime, and compliance at IDC.

What makes RiskFM different is that it can match the performance of highly tuned, custom-built models on day one, without requiring months of manual feature engineering and data preparation. When trained across multiple institutions and geographies simultaneously, RiskFM outperforms traditional machine learning approaches based on gradient boosting and deep learning, and it continues improving as it ingests more data. This means faster deployment, lower implementation costs, and a significantly reduced maintenance burden compared with legacy systems. The model is designed to expand across the full range of financial crime prevention, from mule account detection to anti-money laundering compliance, giving institutions a scalable intelligence layer that grows with their needs.

What's the Bottom Line for Plan Sponsors?

The message from Canadian regulators is clear: the greatest risk of AI is failing to act decisively. Financial institutions that wait for perfect clarity or zero risk will find themselves outpaced by competitors and outmaneuvered by criminals who are already deploying AI at scale. The financial sector drives up to 8 percent of Canada's GDP, making this not just a corporate governance issue but a matter of national economic importance.

Plan sponsors who oversee retirement savings for millions of Canadians must move "dynamically to capture AI's benefits while responding to fast-evolving risks," according to FIFAI II. That means investing in board education, establishing clear governance structures, deploying AI-powered fraud detection and compliance tools, and building resilience into technology supply chains. The cost of inaction, regulators warn, will ultimately exceed the cost of responsible innovation.