The One Simple Trick That Actually Works Against AI Voice Scams

Artificial intelligence has become so good at faking reality that even digital forensics experts struggle to tell real from fake. Yet a UC Berkeley researcher says the solution isn't high-tech detection; it's a simple family code word. With just 15 seconds of audio, AI can now clone a person's voice convincingly enough to fool loved ones into sending money in a crisis. The problem is accelerating faster than defenses can keep up, leaving older adults particularly vulnerable to scams that exploit emotional urgency rather than technical sophistication.

Fraud losses among adults 60 and older have surged dramatically in recent years, rising from approximately $600 million in 2020 to $2.4 billion in 2024, according to a 2025 Federal Trade Commission (FTC) report. Much of that increase was driven by cases in which victims lost more than $100,000, often through investment schemes, impersonation scams, or online relationships that turned fraudulent. Increasingly, these scams are powered by AI tools that are becoming cheaper and easier to use every month.

Why AI Scams Are Harder to Detect Than Ever

The speed of AI improvement is outpacing human adaptation. Hany Farid, a UC Berkeley professor who has spent more than two decades studying manipulated media, explained the scale of the challenge.

"We used to measure progress in years. Now it's happening in weeks," Farid said.


The visual glitches that once gave away deepfakes are rapidly disappearing. Faces move naturally, lighting behaves correctly, and the small imperfections that forensic experts once relied on to spot fakes are becoming nearly impossible to detect. The real danger, however, is not viral content circulating online; it's the personal messages and calls that feel urgent and intimate. Criminals can now clone a loved one's voice, impersonate them on a phone or video call, and create a fabricated crisis that demands immediate action. In those moments of panic, hesitation disappears, and with it, sometimes, life savings.

How to Protect Yourself and Your Family From AI Scams

  • Create a family code word: Agree on a secret word with loved ones that only you would know. When you receive an urgent call, ask for that word before taking any action. This creates a moment of pause and a way to verify who is actually on the other end of the line.
  • Always call back on a known number: Scammers can "spoof" phone numbers, making it appear as if a call is coming from a child, spouse, or friend when it is not. Even if caller ID shows a familiar name, hang up and call back using a number you know is correct to verify the person's identity.
  • Use established fact-checking sites for viral content: Rather than trying to parse visual clues in videos or images, turn to established fact-checking organizations like Snopes, PolitiFact, and FactCheck.org, which routinely investigate widely shared claims and debunk false content.
  • Test your safeguards regularly: Simple security habits only work if people remember to use them. Periodically test your family code word with loved ones to ensure everyone remembers it when it matters most.

Farid emphasized that the goal is not to outsmart the technology itself, but to change how people respond to it.

"You're not going to detect your way out of this. You have to protect yourself," Farid stated.


Why Indonesia and Emerging Markets Face Unique Risks

The threat extends beyond individual consumers to entire financial systems. In Indonesia, where digital banking, e-commerce, and online lending continue to scale rapidly, AI-enabled fraud has become one of the most pressing emerging cybersecurity threats. The challenge is not simply detecting fraud faster, but recognizing that identity itself has become the primary attack surface.

AI-generated scams differ fundamentally from earlier fraud models in three critical ways: they operate at scale with realistic deepfakes and AI-generated documents that defeat basic human review; they automate persuasion by personalizing phishing messages and conversations in local languages; and they unfold across multiple stages, beginning with social engineering and ending with account misuse or financial extraction. For defenders, this means identity checks that rely on static rules or point-in-time verification are increasingly unreliable.

Synthetic identity fraud represents an even more insidious category of risk. Unlike traditional identity theft, synthetic identities combine real personal data with fabricated attributes, making them hard to blacklist and difficult to detect early. These identities are designed for longevity, allowing fraudsters to build trust over time. In Indonesia's rapidly expanding digital finance ecosystem, faster onboarding processes and remote verification make it easier for synthetic identities to mature unnoticed. Losses typically surface only after accounts are used for credit abuse, mule networks, or transaction laundering.

Most Indonesian organizations still exhibit structural weaknesses across multiple areas: over-reliance on document checks and manual review, limited linkage between onboarding outcomes and downstream account behavior, continued dependence on SMS one-time passwords (OTP), and insufficient behavioral or device-based verification. These gaps weaken effective cybersecurity by allowing attackers to exploit organizational blind spots rather than technical flaws.

The path forward requires organizations to combine document verification with liveness detection, device intelligence, and contextual risk scoring; shift toward adaptive authentication based on behavior and risk; and build a shared identity layer accessible to fraud, security, and compliance teams. The priority is to align people, processes, and technology around a shared understanding of identity risk so that defenses remain effective across the full customer lifecycle.
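To make the idea of contextual risk scoring and adaptive authentication concrete, here is a minimal sketch in Python. Every signal name, weight, and threshold below is an illustrative assumption, not a description of any real vendor's system; production scoring models are typically learned from data rather than hand-weighted.

```python
# Hypothetical sketch: combine identity signals into a risk score,
# then pick an authentication step based on that score.
# Signal names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    document_verified: bool   # passed a document check
    liveness_passed: bool     # passed a liveness (real-person) check
    known_device: bool        # device previously seen for this account
    behavior_anomaly: float   # 0.0 (typical) .. 1.0 (highly unusual)


def risk_score(s: SessionSignals) -> float:
    """Fold independent signals into a single 0..1 risk score."""
    score = 0.0
    if not s.document_verified:
        score += 0.3
    if not s.liveness_passed:
        score += 0.3
    if not s.known_device:
        score += 0.2
    score += 0.2 * s.behavior_anomaly
    return min(score, 1.0)


def required_step(score: float) -> str:
    """Adaptive authentication: add friction only as risk rises."""
    if score < 0.2:
        return "allow"               # low risk: no extra friction
    if score < 0.5:
        return "step_up_biometric"   # medium risk: re-verify liveness
    return "manual_review"           # high risk: route to fraud team


# Example: verified documents and liveness, but an unknown device
# and mildly unusual behavior trigger a step-up check.
signals = SessionSignals(document_verified=True, liveness_passed=True,
                         known_device=False, behavior_anomaly=0.25)
print(required_step(risk_score(signals)))  # → step_up_biometric
```

The point of the sketch is the shape of the decision, not the numbers: because the score blends document, device, and behavioral evidence, no single forged artifact (a deepfaked document, a spoofed number) is enough to pass, which is exactly the property static point-in-time checks lack.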

As AI continues to improve at an accelerating pace, the gap between attacker capability and defender readiness will only widen unless individuals and organizations take action now. The good news is that the most effective defenses are often the simplest ones: a family code word, a habit of calling back, and a commitment to verify before trusting.