Insurance brokers are confronting a fundamental problem: artificial intelligence is reshaping cyber risk faster than policy language can keep up, leaving businesses uncertain whether their coverage actually protects them from AI-driven attacks. With global cybercrime costs projected to exceed $10 trillion annually and the average data breach now costing over $4.5 million, the stakes have never been higher for getting cyber insurance right.

Why Are Brokers Suddenly Unprepared for AI Threats?

The shift is dramatic. A 2026 Check Point report found that global cyber attacks have surged by 70% since 2023, largely driven by AI adoption, with 89% of organizations encountering risky AI prompts. Meanwhile, UNESCO has flagged deepfake-driven fraud as a major threat in 2026, with 37% of fraud experts reporting encounters with voice deepfakes and 29% with video deepfakes.

The problem is that most cyber insurance policies were written before these threats became widespread, leaving critical questions unanswered. "Insureds are wondering whether their cyber policy covers generative AI-caused losses, and carriers are cautiously assessing their portfolios to gauge systemic risk," explained Garrett Droege, fintech and digital assets leader for North America at WTW. "It is a brand-new risk that everyone is trying to understand as quickly as possible."

The ambiguity runs deep. If an autonomous AI agent acting on behalf of a company creates a vulnerability that leads to a breach, does the policy respond? Are AI agents considered part of the insured? These questions lack clear answers, forcing brokers to navigate uncharted territory with their clients.

What Coverage Gaps Are Hiding in Plain Sight?

While AI threats grab headlines, the most damaging cyber claims remain rooted in familiar attack vectors.
Business email compromise (BEC), funds transfer fraud, and social engineering continue to dominate loss activity across industries, with BEC alone responsible for tens of billions of dollars in global losses over the past decade. Yet these are precisely the areas where coverage is often most restricted.

"These threats are frequently sub-limited or excluded," Droege noted. "We see large enterprises with hundreds of millions in cyber limits still facing meaningful sub-limits on key insuring agreements." This means that even companies with substantial cyber policies may face significant out-of-pocket costs when the most common attacks occur.

The implications are stark. Brokers must stress-test policies against real-world loss events and challenge carriers on claims service standards. When cyber events unfold in seconds, not days, the quality of claims handling can determine whether a company survives the incident intact.

How to Strengthen Your Cyber Insurance Strategy

- Scrutinize AI Coverage Language: Work with carriers to explicitly define what AI-related losses are covered, including deepfake fraud, AI-generated phishing attacks, and autonomous agent vulnerabilities. Ensure policy wording addresses emerging scenarios rather than leaving them ambiguous.
- Challenge Sub-Limits on High-Risk Exposures: Request detailed reviews of sub-limits on business email compromise, funds transfer fraud, and social engineering claims. Large enterprises should negotiate higher limits on these frequently exploited attack vectors.
- Implement Continuous Threat Monitoring: Encourage clients to adopt real-time threat monitoring tools and integrate threat intelligence into their risk management programs. Static risk assessments are no longer sufficient in an environment where cyber events unfold in seconds.
- Assess Third-Party Vendor Risk: Many losses originate not from the insured's own systems but from partners, suppliers, or service providers. Brokers should help clients understand vendor risk transfer and ensure partners maintain adequate cyber insurance coverage.
- Establish Clear Claims Service Standards: Evaluate and challenge claims service commitments when selecting insurers. Brokers should ensure clients partner with carriers that can deliver rapid response when incidents occur.

Why Are Older Adults Becoming Prime Targets for AI Scams?

Beyond enterprise cyber insurance, a parallel crisis is unfolding in the consumer space. Americans age 60 and older lost an estimated $81 billion to fraud last year, according to federal data, and artificial intelligence is rapidly increasing the scale and realism of scams targeting this vulnerable population. Voice cloning, deepfake videos, and impersonation fraud have become so convincing that even skeptical victims struggle to detect them.

The mechanics are straightforward and terrifying. A scammer can now clone a voice from just a few seconds of audio and impersonate a family member in distress, creating manufactured emergencies designed to bypass rational decision-making. Gary Schildhorn, featured in new educational materials, narrowly avoided sending money after receiving a call that used his son's cloned voice.

"AI has made scams faster, cheaper and far more convincing," said Brian Long, co-founder and CEO of Adaptive Security. "A scammer can now clone a voice from a few seconds of audio and impersonate a family member in distress. Education is one of the most effective tools we have to stop this."

In response, Adaptive Security has launched free public training resources designed to help families protect older adults from AI-enabled scams. The course is available at no cost in 14 languages, including the official United Nations languages, and includes real-world examples and guidance from cybersecurity and law enforcement experts.
Brady Finta, a former FBI agent and the founder and CEO of the National Elder Fraud Coordination Center (NEFCC), emphasized the psychological dimension of these attacks. "These scams are becoming more sophisticated and emotionally manipulative," Finta explained. "They are designed to create urgency and fear so people act before verifying the situation. Teaching people how to pause and confirm what's happening can prevent significant financial loss."

What Practical Steps Can Families Take Right Now?

The training materials emphasize simple but effective verification practices that can stop scams before money changes hands. These include calling family members back on trusted phone numbers rather than using numbers provided by the caller, creating family verification code words that only real family members would know, and avoiding financial decisions made under pressure.

The course also shows families how easy it is to create and share their own deepfakes, giving older adults a safe way to see how convincing these tools have become before criminals use them for fraud.

The broader message is clear: whether at the enterprise level or in family settings, AI-driven threats require a fundamental shift in how we approach security. Brokers, insurers, and families alike must move beyond reactive responses and build proactive defenses. For brokers, this means becoming trusted advisors who help clients navigate ambiguity and close coverage gaps. For families, it means education and verification protocols that can stop scams before they succeed.

As cyber risk continues to evolve at an accelerating pace, the window for action is narrowing. Organizations that wait for perfect policy language or complete regulatory clarity will find themselves exposed. The time to act is now.