Payments fraud is no longer about phishing emails or invoice tricks. Criminals are now using AI-generated deepfakes to impersonate executives and pressure finance teams into authorizing fraudulent transfers in seconds. A new report by Veriff found that attempted fraud in the payments sector jumped 89% over the past year, with a 3.6-fold increase in sophisticated AI-driven threats, including deepfakes and digitally manipulated identities. For corporate treasurers, CFOs, and finance professionals, this shift represents a fundamental change in how fraud attacks work and who is most at risk.

How Are Criminals Using AI to Impersonate Your Executives?

Deepfake technology has made it remarkably easy for criminals to create convincing impersonations. Generative AI tools can now replicate voices, faces, and communication styles with startling accuracy, and the raw material is often publicly available. A finance team might receive an urgent payment request delivered through a staged video call, voice cloning, or a highly realistic fraudulent email that appears to come from a senior finance executive.

The threat is particularly dangerous because it exploits established trust within organizations. A convincing deepfake impersonation can pressure employees into bypassing normal verification procedures, potentially exposing organizations to significant financial losses.

What makes this attack vector so effective is that you do not need to be a high-profile treasurer or CFO to become a target: anyone responsible for initiating or approving high-value financial transactions is a potential one. Publicly available audio or video from conference presentations, media interviews, webinars, podcasts, or earnings calls can be weaponized to generate convincing deepfakes in minutes.

Why Are Executives' Personal Devices Becoming the Weak Link?

Cybercriminals have discovered an easier path into corporate infrastructure: the personal digital lives of executives.
By infiltrating executives' personal devices and home networks with malware, deepfakes, fake accounts, and ransomware, threat actors are exploiting these weaker touchpoints to gain entry into critical corporate infrastructure that was once considered secure. This shift reflects a broader understanding among criminals: corporate security perimeters have been hardened, but personal devices often lack the same level of protection.

The implications extend beyond immediate financial theft. A successful deepfake attack can undermine corporate credibility, erode customer and investor trust, and negatively affect sales, partnerships, and even stock prices. This is why treasury and payments teams must now view executive cybersecurity as a shared responsibility.

How to Strengthen Your Payments Security Against AI-Driven Fraud

- Implement Real-World Fraud Training: Finance teams should receive regular training that includes real-world examples of fraud attempts involving AI-generated voices, deepfake videos, and sophisticated email spoofing. Employees need to understand the tactics used by malicious actors and the potential consequences of compromised payment workflows.
- Teach Recognition of Warning Signals: Train staff to identify red flags such as unexpected urgency, unusual payment requests, or deviations from established procedures and patterns. Building an organizational culture that encourages employees to question suspicious instructions, even when they appear to come from senior leaders, is critical.
- Establish Multi-Factor Verification Protocols: Require out-of-band verification for high-value transactions. If a payment request arrives via email or video call, verify it through a separate communication channel before processing. This simple step can defeat most deepfake attacks.
- Protect Executive Personal Digital Lives: Organizations should implement security measures to strengthen the personal digital lives and assets of their executives, including endpoint protection, multi-factor authentication on personal accounts, and awareness training about social engineering targeting personal devices.

What Role Does Employee Training Play in Defense?

While technology and internal controls remain vital, many experts agree that people remain the most critical line of defense against payments fraud. Training employees to recognize emerging fraud tactics can significantly reduce an organization's exposure to financial crime. According to the 2026 Cybersecurity Guide for CFOs by Eftsure, effective payments security requires finance teams to understand both the tactics used by malicious actors and the potential consequences of compromised payment workflows.

"Technology and processes are only effective when employees understand why controls exist and how to enforce them, even under pressure," noted the Eftsure guide.

This is particularly important for treasury, accounts payable, and finance professionals who operate on the frontline of payment execution. Regular, tailored training programs should equip these teams to identify anomalies and challenge suspicious requests before financial damage occurs. Building a culture of awareness turns employees into an active layer of defense rather than passive targets.

How Are Criminals Scaling AI-Powered Investment Scams?

Beyond direct payments fraud, criminals are also using AI to scale investment scams at unprecedented levels. New research from Infoblox Threat Intel and Confiant reveals that cybercriminals are abusing Keitaro, a widely used advertising performance tracker, to hide scams and malware behind ordinary web traffic.
Over a four-month period starting October 2025, researchers identified approximately 15,500 domains actively used for malicious Keitaro instances. These infrastructures cloaked everything from investment scams to information-stealing malware, with traffic flowing in from compromised websites, spam, social media, and online advertising.

Among the threats abusing Keitaro, investment scams were by far the largest category. A recent trend within these scams is the use of AI as the central marketing hook. Pages routinely claim "Smart AI Trading Technology" or "Intelligent Trading Solutions" that automate trading and promise outsized returns, sometimes reinforced with deepfake imagery or video. Researchers also saw signs that generative AI is being used programmatically to mass-produce headlines, copy, and visuals for lure pages and ad imagery. This represents a new level of automation in fraud: criminals no longer need to manually craft each scam variation.

"For years, Keitaro has popped up in individual investigations, but no one had stepped back to ask how big the problem really is. We found that Keitaro frequently appeared in malicious campaigns, but the story really isn't about Keitaro; they are just one player in an ecosystem that malicious actors are using to scale and target attacks around the globe," explained Dr. Renée Burton, Vice President of Infoblox Threat Intel.

The research confirms that domain cloaking, implemented through traffic distribution systems and cloaking kits, is now a core component of cybercriminal operations. Rather than building bespoke infrastructure, many threat actors purchase or pirate commercial tracking software that already does what they need. Keitaro's feature-rich, self-hosted design and ease of deployment make it attractive to both legitimate marketers and threat actors.

What Should Organizations Do Right Now?
The convergence of deepfake technology, AI-powered scams, and attacks on executives' personal devices creates a complex threat landscape that requires both technological and human defenses. Organizations cannot rely on technology alone: the most effective defense combines stronger internal controls, specialized training, and a cultural shift that empowers employees to question suspicious requests. Treasury and finance teams should prioritize understanding emerging fraud tactics, implementing verification protocols that defeat deepfakes, and protecting the personal digital lives of executives who have access to critical financial systems. The cost of inaction is measured not just in direct financial losses, but in damaged corporate credibility and eroded stakeholder trust.
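To make the out-of-band verification control recommended earlier more concrete, here is a minimal sketch in Python. It is an illustration only, not a production treasury control: the threshold, function names, and callback flow are all hypothetical assumptions, and any real implementation would live inside an organization's payment platform.

```python
import hmac
import secrets

# Hypothetical policy threshold; real values are set per organization.
HIGH_VALUE_THRESHOLD = 50_000


def requires_out_of_band_check(amount: float) -> bool:
    """High-value payment requests must be confirmed on a second channel."""
    return amount >= HIGH_VALUE_THRESHOLD


def issue_challenge() -> str:
    """Generate a one-time code to be read back over a separate,
    pre-established channel (e.g. a callback to a known phone number)."""
    return secrets.token_hex(4)


def verify_response(expected: str, received: str) -> bool:
    """Constant-time comparison of the code the requester reads back."""
    return hmac.compare_digest(expected, received)


# Example flow: an urgent request arrives by email or video call.
amount = 120_000
if requires_out_of_band_check(amount):
    challenge = issue_challenge()
    # The approver calls the executive on a known-good number and asks
    # them to read the code back. A deepfake on the original channel
    # cannot answer on the second one.
    answer = challenge  # stand-in for the code received on the callback
    approved = verify_response(challenge, answer)
```

The key design choice is that the confirmation travels over a channel the attacker does not control, which is exactly why this step defeats a deepfake that only controls the original call or email.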
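The domain cloaking described in the Keitaro research can also be illustrated with a short sketch. The routing rules, field names, and URLs below are hypothetical, invented for illustration; they are not Keitaro's actual configuration or behavior, only a generic picture of how a traffic distribution system decides which visitors see a lure page.

```python
# Hypothetical decoy and lure destinations for a cloaked campaign.
DECOY = "https://example.com/ordinary-blog-post"
LURE = "https://example.com/ai-trading-lure"

# Illustrative filter rules; real cloaking kits use far richer signals.
KNOWN_SCANNER_AGENTS = {"curl", "python-requests", "googlebot"}
TARGETED_COUNTRIES = {"DE", "FR", "GB"}


def route(visitor: dict) -> str:
    """Serve the scam page only to traffic matching the campaign's
    target profile; crawlers, researchers, and ad reviewers see
    harmless decoy content instead."""
    agent = visitor.get("user_agent", "").lower()
    if any(bot in agent for bot in KNOWN_SCANNER_AGENTS):
        return DECOY
    if visitor.get("country") not in TARGETED_COUNTRIES:
        return DECOY
    if visitor.get("referrer") is None:  # direct visits look like analysts
        return DECOY
    return LURE
```

This asymmetry is what makes cloaked campaigns hard to detect at scale: victims arriving from ads or spam see the "Smart AI Trading" lure, while automated scanners fetching the same domain see an ordinary page.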