Why HR Teams Are Now the Front Line Against AI-Powered Cyber Attacks
The UK government has issued a stark warning to business leaders about AI-powered cyber threats, but security experts say the government's technical recommendations miss the real vulnerability: human behavior. On April 15, the security minister and secretary of state for science, innovation and technology sent a joint letter advising businesses to strengthen their defenses against AI-assisted attacks. However, the advice focuses on certifications and technical baselines, leaving a critical gap that experts say lands squarely in HR's domain.
The government's letter follows Anthropic's announcement of its Mythos model, which is capable of carrying out cyber attacks. While the guidance recommends obtaining government-backed cyber certification and following the National Cyber Security Centre's (NCSC) advice, security leaders argue this approach treats the symptom, not the disease.
What's Really Driving AI-Powered Cyber Risk?
The core problem isn't technical; it's behavioral. Aldis Erglis, chief AI officer at business consultancy Emergn, explained that organizations face three distinct human-centered threats that traditional cybersecurity frameworks don't address: staff feeding sensitive data into unsanctioned AI platforms, payroll teams receiving deepfake requests that appear to come from their boss, and widespread shadow AI usage that nobody admits to.
Conor O'Neill, CEO and co-founder at cybersecurity firm OnSecurity, noted that AI has dramatically lowered the barrier to entry for cybercriminals. "AI has made social engineering attacks frighteningly convincing," he explained. "A spoofed email from a 'colleague' requesting a bank detail change no longer looks like a scam. It looks like a Tuesday morning."
"AI has made social engineering attacks frighteningly convincing. A spoofed email from a 'colleague' requesting a bank detail change no longer looks like a scam. It looks like a Tuesday morning," said Conor O'Neill.
Conor O'Neill, CEO and co-founder at OnSecurity
This shift means that traditional security awareness training is no longer sufficient. Employees need to understand not just how to spot phishing emails, but how to recognize AI-generated content that mimics trusted colleagues and authority figures with near-perfect accuracy.
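Awareness training can also be reinforced with lightweight technical checks. The Python sketch below is a hypothetical illustration, not a recommended product: it reads an email's Authentication-Results header (RFC 8601) for SPF/DKIM/DMARC failures and flags payment-change language for human review. The risk-phrase list and verdict labels are invented for the example.

```python
# Minimal triage sketch: complements, never replaces, user training.
# Assumptions: messages arrive as raw bytes with an Authentication-Results
# header (RFC 8601); the phrase list and verdict labels are illustrative.
import email

RISK_PHRASES = ("bank detail", "account change", "new payee", "urgent payment")

def triage(raw_message: bytes) -> str:
    msg = email.message_from_bytes(raw_message)

    # SPF/DKIM/DMARC outcomes are summarized by the receiving mail server.
    auth_results = (msg.get("Authentication-Results") or "").lower()
    failed_auth = any(f"{check}=fail" in auth_results
                      for check in ("spf", "dkim", "dmarc"))

    # Look for the kind of request O'Neill describes: a bank-detail change.
    payload = msg.get_payload(decode=True) or b""
    body = payload.decode("utf-8", errors="replace").lower()
    risky_request = any(phrase in body for phrase in RISK_PHRASES)

    if failed_auth and risky_request:
        return "quarantine"       # spoof indicators plus a payment-change ask
    if failed_auth or risky_request:
        return "flag-for-review"  # a single signal still warrants human eyes
    return "deliver"
```

Routing to "flag-for-review" rather than silently deleting is as much a cultural choice as a technical one: it gives employees a sanctioned, blame-free path to escalate a suspicious request.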
How Can Organizations Build AI-Aware Security Culture?
Experts recommend that HR teams take the lead in creating a workplace culture where employees understand AI risks and feel empowered to report suspicious activity. Rather than relying solely on technical controls, organizations need a multi-layered approach that combines education, clear policies, and psychological safety.
- Build AI Literacy Into Daily Work: Introduce continuous, job-embedded learning that teaches employees how AI is being used in their industry, what risks it poses, and how to spot AI-generated deepfakes and spoofed communications. This should be woven into regular workflows, not relegated to annual training sessions.
- Establish Clear Policies on Sanctioned Tools: Define which AI platforms employees are permitted to use and for what purposes. Many organizations suffer breaches because employees use consumer-grade AI tools like ChatGPT to process sensitive company data, unaware of the security implications (a sketch of such a policy expressed in code follows this list).
- Create a Culture of Open Reporting: Encourage staff to raise concerns about suspicious activity or unauthorized tool usage without fear of punishment. Shadow AI usage thrives in environments where employees hide their tool adoption from management.
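To make the sanctioned-tools point concrete, a policy can be written as data, so the same source of truth serves both as published guidance for staff and as an enforcement rule for a gateway or browser plugin. This is a minimal sketch under assumptions: the tool names, data classifications, and is_permitted helper are hypothetical, not any vendor's API.

```python
# Hypothetical sanctioned-AI-tools policy expressed as data, so it can be
# published to staff and enforced by a proxy from the same source of truth.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolPolicy:
    name: str
    allowed_data: frozenset  # data classifications the tool may receive

POLICIES = {
    "enterprise-copilot": AIToolPolicy("enterprise-copilot",
                                       frozenset({"public", "internal"})),
    "consumer-chatbot": AIToolPolicy("consumer-chatbot",
                                     frozenset({"public"})),
}

def is_permitted(tool: str, data_class: str) -> bool:
    """True only if the tool is sanctioned for this data classification."""
    policy = POLICIES.get(tool)
    return policy is not None and data_class in policy.allowed_data

# Pasting confidential data into a consumer chatbot should be blocked,
# while the sanctioned enterprise tool may handle internal material.
assert not is_permitted("consumer-chatbot", "confidential")
assert is_permitted("enterprise-copilot", "internal")
```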
"HR should be at the forefront of educating the workforce as to the nature of these threats, both how to spot them and what to do if people encounter one," said Jordan Burke.
Jordan Burke, co-founder and director at Nine Dots Development
Jordan Burke, co-founder and director at HR training provider Nine Dots Development, emphasized that HR departments must take ownership of this challenge. The traditional separation between IT security and human resources is no longer viable in an AI-driven threat landscape .
How Widespread Is the AI Fraud Problem?
Recent research reveals the scale of the challenge organizations face. A survey of 500 senior professionals across the fintech, e-commerce, gaming, banking, and travel sectors found that 97 percent of organizations reported an increase in AI-enabled attacks, yet only 36 percent were capable of stopping fraud at any point in the customer journey.
The financial impact is staggering. Organizations reported an average of $4.5 million in annual AI-enabled fraud losses. A further $3.1 million in revenue was reportedly affected by false positives, where legitimate customers were accidentally blocked by overly aggressive fraud detection tools.
Deepfakes have become a mainstream threat, not an edge case. Ninety-three percent of organizations encountered deepfake-style attempts in the past 12 months, with 45.4 percent experiencing multiple incidents. The top entry points for deepfake attacks were payments and checkout (22 percent), customer support and call centers (16 percent), and onboarding and identity verification (15 percent).
The challenge extends beyond detection. Only 52 percent of organizations can explicitly track or label AI-assisted fraud, meaning nearly half are flying blind when it comes to understanding the nature and scope of the attacks they're experiencing.
Why Are Technical Defenses Alone Failing?
The research reveals a critical gap between what organizations can detect and what they can defend against. Sixty percent of companies reported losing more than 25 percent of their accounts after suffering a fraud event, suggesting that even organizations with some detection capability struggle to contain damage once an attack succeeds.
The top obstacles to improving defenses are authentication and identity binding (46 percent of organizations cited this as a blocker) and the challenge of distinguishing legitimate automation from malicious AI traffic (40 percent). These are not purely technical problems; they require human judgment and contextual understanding that automated systems cannot provide.
The government's April letter represents an important acknowledgment that AI-powered cyber threats are a strategic priority. However, experts agree that the real work of defense falls to HR teams, security leaders, and employees themselves. Organizations that treat this as purely an IT problem will continue to lose ground to attackers who exploit the human element of their security posture.