Banks are discovering that AI's greatest value in cybersecurity comes not from letting it make decisions, but from having it do the tedious work humans hate. Security teams at major financial institutions have found that treating artificial intelligence as a read-only assistant that summarizes alerts and stitches together evidence dramatically improves response times, but granting the technology autonomy to act independently introduces severe vulnerabilities that can create new security incidents. What Happens When Banks Give AI Too Much Power? The gap between vendor promises and real-world outcomes became starkly clear when security teams at auto lender Exeter Finance and food manufacturer Tyson Foods presented their findings at the 2026 RSAC Conference. Both organizations built AI assistants to help their security operations centers handle the relentless barrage of alerts and threats, but they quickly learned a critical lesson: the moment they allowed the AI model to execute actions without human oversight, things broke. The core problem stems from how large language models (LLMs), which are AI systems trained on vast amounts of text data to understand and generate human language, interact with untrusted information. These models struggle to reliably distinguish between actual instructions and data embedded within files, logs, or support tickets. This vulnerability is called prompt injection, and it's become a serious concern for financial regulators. If an attacker hides a malicious command inside a system log or support ticket, they can manipulate the AI's output or force it to take unauthorized actions. "If we asked the model to summarize, draft and link evidence, it made analysts faster. By contrast, if the teams asked the agent to make decisions or act on security alerts, it would create new incidents," stated Ankit Gupta, principal security engineer at Exeter Finance, and Shilpi Mittal, lead security engineer at Tyson Foods. Ankit Gupta, Principal Security Engineer at Exeter Finance, and Shilpi Mittal, Lead Security Engineer at Tyson Foods How Can Banks Deploy AI Safely in Security Operations? The two security teams discovered that AI delivers measurable value when confined to specific, bounded tasks. Rather than trying to automate threat response, they focused on using AI to eliminate the repetitive, manual work that slows down human analysts. This approach, sometimes called "evidence stitching," involves pulling together relevant security artifacts like IP addresses, browsing sessions, and unique identifiers from multiple tools and presenting them in a cohesive summary. - Alert Summarization: AI reads through incoming security alerts and generates concise summaries that highlight the most critical information, allowing analysts to quickly understand what they're dealing with without reading lengthy technical logs. - Evidence Stitching: The AI automatically collects and links security artifacts from five to seven different tools, eliminating the tedious manual process of copying and pasting data across systems, which analysts call "swivel-chairing." - Draft Communications: AI generates first-pass reports and documentation that analysts can review and refine, reducing the time spent on administrative tasks and freeing analysts to focus on actual threat analysis. - Query Suggestions: The AI recommends the next best investigative steps based on the evidence it has compiled, helping analysts know where to look next without having to manually plan their investigation. The results from this focused approach were substantial. The Exeter Finance and Tyson Foods teams achieved a 36% reduction in mean time to detect threats and a 22% reduction in mean time to respond. They also saw a 16-point drop in false positives, meaning analysts spent less time chasing alerts that weren't actually threats. Beyond the metrics, analyst sentiment toward the AI tools improved over time, suggesting that the technology was genuinely making their jobs easier rather than creating frustration. What Safeguards Do Banks Need to Implement? Moving from a controlled pilot program to a live production environment exposed the real security risks. Both teams implemented strict guardrails to prevent the AI from causing damage. They operated under a firm "no gate, no action" policy, meaning the AI could never execute any action without explicit human approval. To maintain this safety posture, the security teams continuously ran automated scoring and simulated attacks, a practice known as red-teaming, to test the system against data exfiltration and unsafe tool usage. They treated the AI like production software, utilizing continuous evaluations and version controls to prevent the system from degrading over time. This mirrors how financial regulators and international watchdogs now expect banks to secure their AI deployments against novel attack vectors. The New York State Department of Financial Services warned in October 2024 that attackers can leverage AI to conduct reconnaissance and determine how best to deploy malware. Similarly, the G7 Cyber Expert Group advised in September 2025 that AI introduces new cybersecurity risks, such as attackers using prompt injection to manipulate outputs or retrieve sensitive information. Why Is This Approach Different From Traditional AI Automation? The financial industry has used AI for years to optimize back-office operations and enhance cybersecurity monitoring. Banks actively deploy natural language processing, a subset of AI that helps computers understand human language, to monitor emails and detect phishing attacks. Automating these data-heavy processes allows human investigators to focus efforts on responding to a smaller number of higher-risk activities. What makes the Exeter Finance and Tyson Foods approach distinctive is their explicit rejection of the "autonomous agent" model that many vendors promote. Instead of building AI systems that can independently investigate threats and take action, they built AI systems that augment human decision-making. This distinction matters because it acknowledges a fundamental truth: AI excels at processing large volumes of information and identifying patterns, but it struggles with the judgment calls that require understanding context, organizational priorities, and risk tolerance. For banks facing constant cyberattacks and information overload, this balanced approach offers a practical path forward. Rather than waiting for fully autonomous AI systems that may never be safe enough to deploy at scale, financial institutions can immediately benefit from AI that handles the tedious work while keeping humans in control of the decisions that matter most.