The AI Risk Split: Why Your Finance Team Needs to Speak Two Languages
Financial institutions are under siege from AI-powered attacks, but many organizations are fighting the wrong battle by treating all AI risks as identical problems. The distinction between governance risks and technical risks is reshaping how banks, accounting firms, and financial services companies should defend themselves. Understanding this split isn't just academic; it determines whether your defenses actually work.
Why Are Financial Services Facing an Unprecedented Attack Wave?
The numbers tell a sobering story. In 2025, financial services globally experienced 1,735 attacks per week, representing a 15% increase from 2024. That's roughly 90,000 attacks annually targeting an industry that manages trillions in customer assets. The acceleration isn't random; it's driven by AI tools that make launching sophisticated attacks faster and easier than ever before.
What makes this moment different is the nature of the threats themselves. Automated, AI-directed attacks can probe networks and escalate privileges faster than human analysts can detect them. Phishing campaigns have become more convincing, with AI generating compelling messages that are harder for untrained employees to spot. In 2025, over 7,960 victims were listed on data-leak sites, a staggering 53% year-over-year increase. These aren't isolated incidents; they represent a fundamental shift in how criminals operate.
What's the Difference Between Governance Risks and Technical Risks?
Here's where most organizations stumble. When executives hear "AI risk," they often assume one solution fits all. But experts now emphasize that AI risks fall into two distinct categories that require completely different responses.
Functional risks, the governance side of the split, concern strategy, oversight, and business value. These are policy-level problems that live in the boardroom and the C-suite. Technical risks, by contrast, concern models, infrastructure, and security. These are engineering problems that demand deep technical expertise.
The confusion matters because treating a technical problem as a governance issue, or vice versa, leaves you vulnerable. Consider a concrete example: an AI system that hallucinates, producing plausible but fabricated facts, is a technical risk rooted in how the model was designed. But an organization that deploys unvetted AI tools without any oversight is facing a functional risk. Both are dangerous, but they demand different solutions.
"We talk about model bias or prompt injections or back doors, these are all concepts that need to be understood and, from an auditor standpoint, it's important to see how these are broken down. We need to keep it simple but also categorize these as functional and technical risks," said Vikrant Rai, managing director of risk advisory, internal audit and cybersecurity at Grant Thornton.
Functional risks include the absence of enterprise-wide AI policies, AI initiatives launched without return-on-investment modeling, rapid deployment without proper vetting, and the use of unvetted AI tools. Technical risks include adversarial manipulation of training data, model drift and deterioration, prompt injection attacks where users trick AI systems into ignoring their instructions, and insufficient technical controls.
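To see how the split plays out in practice, consider how an audit team might encode it in a simple risk register so each entry routes to the right owner. The sketch below, in Python, uses risk names drawn from the lists above; the owner assignments are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    FUNCTIONAL = "functional"  # governance, strategy, business value
    TECHNICAL = "technical"    # models, infrastructure, security

@dataclass
class AIRisk:
    name: str
    category: RiskCategory
    owner: str  # accountable party; the titles below are hypothetical

# Illustrative register entries drawn from the risks named in this article.
RISK_REGISTER = [
    AIRisk("No enterprise-wide AI policy", RiskCategory.FUNCTIONAL, "Chief Risk Officer"),
    AIRisk("AI initiative launched without ROI modeling", RiskCategory.FUNCTIONAL, "CFO"),
    AIRisk("Unvetted AI tools in use", RiskCategory.FUNCTIONAL, "Chief Risk Officer"),
    AIRisk("Adversarial manipulation of training data", RiskCategory.TECHNICAL, "ML engineering"),
    AIRisk("Model drift and deterioration", RiskCategory.TECHNICAL, "ML engineering"),
    AIRisk("Prompt injection attacks", RiskCategory.TECHNICAL, "Security engineering"),
]

def risks_for(category: RiskCategory) -> list[AIRisk]:
    """Return the register entries that route to a given category's owners."""
    return [r for r in RISK_REGISTER if r.category == category]

if __name__ == "__main__":
    for cat in RiskCategory:
        print(f"\n{cat.value.upper()} risks:")
        for risk in risks_for(cat):
            print(f"  - {risk.name} -> {risk.owner}")
```

The value isn't the code itself; it's forcing every AI risk to declare a category and an accountable owner before it enters the register.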
How to Build AI Risk Controls That Actually Work
- Assess Your Functional Risks First: Establish enterprise-wide AI policies before deploying any system. Require return-on-investment modeling for AI initiatives and implement a formal vetting process for any AI tools your organization adopts. This prevents the governance gaps that create blind spots.
- Address Technical Vulnerabilities in Parallel: Work with your technical teams to monitor for model drift, implement defenses against prompt injection attacks, and ensure your AI systems have safeguards against adversarial manipulation. These require ongoing technical oversight, not one-time audits; a minimal drift-monitoring sketch follows this list.
- Integrate AI Risk Into Your Broader Framework: Don't treat AI risk as a separate problem. Instead, fold it into your existing risk management structure so everyone understands the AI threat landscape. This ensures that both functional and technical risks get appropriate attention and resources.
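The drift monitoring called for in the second step can start small. The sketch below is a minimal example, assuming you retain a baseline sample of model scores from deployment and a recent sample from production: it computes the population stability index (PSI), a common drift statistic. The 0.10/0.25 reading thresholds in the comments are conventional rules of thumb, not requirements from any framework named here.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """Compute PSI between a baseline sample and a recent sample.

    Rule-of-thumb reading (a convention, not a standard):
      < 0.10 stable; 0.10-0.25 investigate; > 0.25 likely drift.
    """
    # Bin edges come from the baseline so both samples share one grid.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) when a bin is empty in either sample.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # scores at deployment
    recent = rng.normal(loc=0.4, scale=1.2, size=5_000)    # scores this month
    psi = population_stability_index(baseline, recent)
    print(f"PSI = {psi:.3f}" + ("  -> investigate for drift" if psi > 0.10 else ""))
```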
The key insight is that one-size-fits-all approaches fail. Organizations need to understand which risks are governance problems and which are technical problems, then apply the right tools and expertise to each category.
What Regulations Are Shaping AI Risk Requirements?
The regulatory landscape is moving fast. As of now, 38 states have passed laws regulating AI in some form, including New York's RAISE Act, California's Gen AI Training Data Transparency Act, Colorado's AI Act, Utah's AI Policy Act, and Texas's RAI Governance Act. While the specifics vary, all of them focus on responsible AI practices and the ability to explain and demonstrate how organizations are managing AI systems.
Beyond state laws, organizations can reference frameworks like the NIST AI Risk Management Framework (NIST AI RMF), ISO 42001, the OECD AI Principles, and others. Each framework emphasizes different aspects and accounts for different risks. The challenge is selecting the frameworks that align with your organization's specific needs and regulatory obligations.
"You will see a combo of standards and frameworks. You don't have to use all of them, but it is important to understand what regulations apply to you and what principles and risks you want to deal with and manage more effectively," said Vikrant Rai.
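One way to act on that advice is to make the "combo" explicit: list the risk areas you care about and map each to the frameworks that address it. The pairings in the sketch below are an assumed, illustrative reading of the frameworks named above (the NIST AI RMF's Govern, Map, Measure, and Manage functions are real; the mapping itself is this sketch's judgment, not an official crosswalk).

```python
# Illustrative, assumed mapping of risk priorities to the frameworks named
# in this article. Build a real crosswalk from the framework texts and your
# own regulatory obligations; do not rely on these pairings as-is.
FRAMEWORK_COVERAGE = {
    "governance and accountability": ["NIST AI RMF (Govern)", "ISO 42001"],
    "risk identification and measurement": ["NIST AI RMF (Map, Measure)"],
    "responsible-AI principles": ["OECD AI Principles"],
    "ongoing management and improvement": ["NIST AI RMF (Manage)", "ISO 42001"],
}

def frameworks_for(priorities: list[str]) -> set[str]:
    """Collect the frameworks that cover a set of stated risk priorities."""
    return {fw for p in priorities for fw in FRAMEWORK_COVERAGE.get(p, [])}

if __name__ == "__main__":
    combo = frameworks_for(["governance and accountability",
                            "ongoing management and improvement"])
    print("Suggested combo:", ", ".join(sorted(combo)))
```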
For financial services specifically, two regulatory requirements are reshaping security strategies. The Digital Operational Resilience Act (DORA) and PCI DSS 4.0 both signal a clear shift away from checkbox compliance toward ongoing monitoring, automated controls, and real-time visibility into security posture. Zero Trust principles, which assume no user or system is trustworthy by default, are becoming the primary framework for meeting these mandatory requirements.
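Zero Trust is an architecture posture rather than a product, but its core rule, deny by default and verify every request, fits in a few lines. The sketch below is a simplified illustration with invented attribute and resource names; production systems delegate these checks to identity providers and policy engines rather than hand-rolled code.

```python
from dataclasses import dataclass

@dataclass
class Request:
    # Illustrative attributes; real deployments verify far more signals.
    user_authenticated: bool
    device_compliant: bool
    resource: str
    role: str

# Explicit allow-list: any (role, resource) pair not listed here is denied.
ALLOWED = {
    ("analyst", "reporting-db:read"),
    ("auditor", "audit-logs:read"),
}

def authorize(req: Request) -> bool:
    """Deny by default: every request must re-prove identity, device health,
    and an explicit role-to-resource grant. No trust is inherited from
    being 'inside' the network."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    return (req.role, req.resource) in ALLOWED

if __name__ == "__main__":
    ok = authorize(Request(True, True, "reporting-db:read", "analyst"))
    blocked = authorize(Request(True, False, "audit-logs:read", "auditor"))
    print(ok, blocked)  # True False: a non-compliant device is refused
```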
Why Are Supply Chain Vulnerabilities the Weak Link?
Financial institutions invest heavily in hardening their own security, making them difficult targets. But attackers have learned to go around the fortress by targeting the vendors and third-party providers that connect to it. Most exploits in the financial sector originate from supply chain vulnerabilities. A compromised vendor can open the door to simultaneous attacks against multiple targets, allowing attackers to spoof emails or directly access sensitive customer information.
This creates a cascading risk problem. Your organization might have excellent controls, but if a vendor or API provider becomes compromised, attackers could move laterally into your main financial network. The traditional "castle-and-moat" security model has dissolved, replaced by decentralized networks that inadvertently expose a far larger attack surface spanning from your data center to third-party service and API providers.
For accounting firms and financial services organizations, this means extending your risk management framework to include vendor security assessments. You can't control what your vendors do, but you can require them to meet certain security standards and maintain visibility into their security posture.
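A practical starting point is a standing, scored vendor checklist rather than a one-time questionnaire. In the sketch below, the control names, weights, and review threshold are all illustrative assumptions; what matters is the shape of the control: periodic re-scoring with a threshold that triggers manual review.

```python
# Illustrative, assumed control checklist and weights; calibrate these to
# your own vendor risk framework before relying on the scores.
CONTROLS = {
    "mfa_enforced": 0.25,
    "incident_response_plan_tested": 0.20,
    "independent_security_attestation": 0.20,  # e.g., a recent audit report
    "breach_notification_sla": 0.20,
    "least_privilege_api_access": 0.15,
}
REVIEW_THRESHOLD = 0.8  # below this, escalate the vendor for manual review

def vendor_score(answers: dict[str, bool]) -> float:
    """Weighted share of controls the vendor currently meets."""
    return sum(w for name, w in CONTROLS.items() if answers.get(name, False))

if __name__ == "__main__":
    vendor = {
        "mfa_enforced": True,
        "incident_response_plan_tested": True,
        "independent_security_attestation": False,
        "breach_notification_sla": False,
        "least_privilege_api_access": True,
    }
    score = vendor_score(vendor)
    print(f"score={score:.2f}",
          "-> escalate for review" if score < REVIEW_THRESHOLD else "-> ok")
```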
What Role Should Internal Auditors Play?
Internal auditors are uniquely positioned to navigate the AI risk landscape because they already understand how to assess both governance and technical risks across an organization. They can provide the independent perspective needed to reveal potential risks while also allowing for innovation with appropriate guardrails.
"Internal audit has always been well positioned to navigate new changes and AI is no different. We are here to provide that independent perspective that reveals potential risks but also allows for innovation with having those appropriate guardrails," said Alex Hinkebein, senior manager for risk advisory, internal audit and cybersecurity at Grant Thornton.
However, this requires auditors to understand both functional and technical AI risks, not just traditional IT controls. CPAs and auditors need to familiarize themselves with concepts like model bias, prompt injections, and data poisoning attacks. They don't need to become AI engineers, but they do need to understand how these risks can play out if left unaddressed.
The bottom line is clear: AI adoption is accelerating, and agentic AI systems that make autonomous decisions are becoming more common. Going back to the way things were is not an option. Organizations that succeed will be those that understand the difference between governance and technical risks, apply the right frameworks to each, and integrate AI risk management into their broader risk strategy. For financial services, where the stakes involve customer assets and regulatory compliance, this distinction isn't just important; it's essential.