The Week AI Infrastructure Broke: How Three Major Frameworks Became Attack Targets in 72 Hours

Three critical vulnerabilities across foundational AI frameworks exposed millions of enterprise deployments to attackers in a single week, revealing systemic weaknesses in how organizations build AI applications. Between March 17 and March 24, 2026, LangChain, Langflow, and LiteLLM each suffered a high-severity security incident, demonstrating how fragile the trust model in open-source AI infrastructure has become.

What Happened to LangChain and LangGraph?

On March 27, security researchers at Cyera disclosed three separate vulnerabilities in LangChain and LangGraph, the most widely used open-source frameworks for building large language model (LLM)-powered applications. Combined, these frameworks account for more than 60 million PyPI downloads per week, making them central to how enterprises connect AI models to their data.

The three flaws each exposed different types of enterprise data:

  • Path Traversal Flaw (CVE-2026-34070): A vulnerability in LangChain's prompt-loading module performed no path validation, allowing attackers to read arbitrary files on the host, including Docker configurations, application secrets, and internal documentation.
  • Serialization Injection (CVE-2025-68664): Tracked as "LangGrinch," this flaw in langchain-core's serialization functions let attackers extract secrets from environment variables and potentially achieve arbitrary code execution, with secret loading from the environment enabled by default.
  • SQL Injection (CVE-2025-67644): A vulnerability in LangGraph's SQLite checkpoint implementation allowed attackers to manipulate SQL queries and access conversation histories associated with sensitive workflows.
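The SQL injection class behind the checkpoint flaw is easiest to see in miniature. The sketch below is not LangGraph's actual checkpoint code; the table, column, and function names are invented for illustration. The point is the difference between splicing untrusted input into a SQL string and passing it as a bound parameter:

```python
import sqlite3

# Hypothetical illustration of the vulnerability class (not LangGraph's code).
def get_history_unsafe(conn, thread_id):
    # UNSAFE: thread_id is interpolated into the SQL string verbatim,
    # so a crafted value can reshape the query itself.
    return conn.execute(
        f"SELECT msg FROM checkpoints WHERE thread_id = '{thread_id}'"
    ).fetchall()

def get_history_safe(conn, thread_id):
    # SAFE: the ? placeholder sends thread_id as data, never as SQL.
    return conn.execute(
        "SELECT msg FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, msg TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('alice', 'secret plan')")
conn.execute("INSERT INTO checkpoints VALUES ('bob', 'hello')")

# A crafted thread ID dumps every conversation through the unsafe path...
payload = "bob' OR '1'='1"
print(len(get_history_unsafe(conn, payload)))  # 2 -- all histories leak
# ...but matches nothing through the parameterized path.
print(len(get_history_safe(conn, payload)))    # 0
```

This is why the fix for a flaw like this is typically a one-line change to parameterized queries rather than input sanitization.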

Organizations using these frameworks needed to update immediately to patched versions to prevent exploitation.

Why Did Langflow Get Compromised So Quickly?

The Langflow incident stands out not for technical complexity but for speed. On March 17, a critical remote code execution vulnerability was disclosed in Langflow, the visual framework for building AI agent workflows with over 145,000 GitHub stars. Tracked as CVE-2026-33017 with a severity score of 9.3 out of 10, the flaw allowed unauthenticated attackers to execute arbitrary Python code through a single HTTP request.

The vulnerability worked because an endpoint designed to let unauthenticated users build public workflows accepted attacker-controlled flow data containing arbitrary Python code. That code was passed directly to the exec() function with zero sandboxing. Sysdig's Threat Research Team observed the first exploitation attempts within 20 hours of the advisory's publication. Within 21 hours, Python-based exploitation scripts appeared in the wild. Within 24 hours, attackers were harvesting .env and .db files from compromised servers.
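The vulnerable pattern can be sketched in a few lines. This is a hypothetical reconstruction, not Langflow's actual handler; the request field names and the allowlist mitigation below are invented for illustration:

```python
# Hypothetical sketch of the vulnerability class (not Langflow's real code):
# untrusted "flow" data from an unauthenticated request reaching exec().
def run_flow_unsafe(flow_data: dict):
    code = flow_data.get("code", "")
    exec(code)  # UNSAFE: arbitrary Python runs with the server's privileges

# An attacker-supplied "flow" is really a command to the host:
malicious_flow = {"code": "import os; os.system('cat .env')"}
# run_flow_unsafe(malicious_flow) would run that command server-side.

# Mitigation sketch: never exec untrusted input. Treat flow definitions as
# pure data and validate each node against a fixed allowlist of types,
# dispatching to pre-written handlers instead of interpreting code.
ALLOWED_NODES = {"prompt", "llm", "output"}

def run_flow_safe(flow_data: dict):
    for node in flow_data.get("nodes", []):
        if node.get("type") not in ALLOWED_NODES:
            raise ValueError(f"rejected node type: {node.get('type')!r}")
        # ...look up a fixed handler for node["type"] here, no exec() involved
```

The structural lesson is that sandboxing exec() after the fact is nearly impossible; the durable fix is keeping untrusted input on the data side of the code/data boundary.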

"The researcher who discovered CVE-2026-33017 found it while examining how Langflow maintainers fixed the previous vulnerability. The same vulnerability class, in a different endpoint, exploiting the same dangerous function," noted the security analysis.

This pattern revealed systemic architectural weakness rather than a one-off bug. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added the vulnerability to its Known Exploited Vulnerabilities catalog on March 25, giving federal agencies until April 8 to patch or stop using the product. This was the second critical Langflow remote code execution flaw in the CISA catalog.

How Did LiteLLM Get Attacked Through a Security Scanner?

The LiteLLM compromise represents the most sophisticated attack of the three incidents and demonstrates how AI infrastructure's position in the software supply chain creates enormous risk from a single point of failure. LiteLLM is a unified API wrapper downloaded approximately 3.4 million times per day, making it a high-value target.

On March 24, two malicious versions of the LiteLLM Python package appeared on PyPI. The attack was orchestrated by a threat group known as TeamPCP. The execution chain worked like this:

  • Initial Compromise: On March 19, TeamPCP rewrote Git tags in the trivy-action GitHub Action repository to point to a malicious release carrying credential-harvesting payloads.
  • Expanding the Attack: On March 23, the same infrastructure was used to attack Checkmarx KICS, another security tool, demonstrating a coordinated campaign.
  • Reaching the Target: On March 24, the chain reached LiteLLM, which ran the compromised Trivy security scanner as part of its build process without pinning a specific version.

The compromised Trivy action exfiltrated the PyPI publishing token from the GitHub Actions runner environment. With that credential, TeamPCP published malicious LiteLLM versions directly to PyPI, bypassing the normal release process entirely. No corresponding tag or release existed on the LiteLLM GitHub repository, making the attack invisible to anyone reviewing the source code.

The malicious packages contained a .pth file, a little-known Python mechanism that auto-executes code every time the interpreter starts, with no explicit import required. The payload was designed to harvest AWS, GCP, and Azure tokens, SSH keys, and Kubernetes configurations.
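The .pth mechanism is easy to demonstrate with a benign payload. In the sketch below, the hook file and directory names are made up for the demo, and site.addsitedir() is called explicitly to stand in for the site-packages scan the interpreter performs automatically at startup; any line in a .pth file that begins with "import" is executed by Python's site machinery:

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway directory to act as a fake site-packages.
site_dir = tempfile.mkdtemp()
pth_path = os.path.join(site_dir, "demo_hook.pth")

# Lines in a .pth file starting with "import" are exec()'d when the
# directory is processed as a site directory -- the user never imports
# anything. A real payload would harvest credentials here instead.
with open(pth_path, "w") as f:
    f.write("import sys; sys.stdout.write('pth hook ran\\n')\n")

# Launch a fresh interpreter that treats site_dir as a site directory.
# At normal startup, site.main() does this for real site-packages dirs.
child_code = (
    f"import site; site.addsitedir({site_dir!r}); print('app code ran')"
)
out = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
)
print(out.stdout)  # hook output appears before the application's own output
```

Because the hook fires before any application code, even a careful reader of the package's modules would see nothing suspicious imported anywhere.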

How to Protect Your AI Infrastructure From These Attacks

  • Update Immediately: Organizations using LangChain, LangGraph, Langflow, or LiteLLM should upgrade to patched versions without delay. For langchain-core, update to version 0.3.81 (or 1.2.5 on the 1.x line) or later. For LangGraph, update langgraph-checkpoint-sqlite to version 3.0.1. For Langflow, upgrade to version 1.9.0 or later.
  • Audit Untrusted Data Flows: Review any workflows that pass untrusted data through LangChain's serialization layer, as this is where the "LangGrinch" vulnerability exploits occur. Disable automatic secret loading from the environment if not required.
  • Restrict Network Exposure: Do not expose Langflow instances directly to the internet. If upgrading is not immediately possible, disable or restrict the vulnerable endpoint to prevent remote code execution attacks.
  • Rotate Credentials: If suspicious activity is detected, rotate all API keys, database credentials, and cloud secrets immediately. Langflow instances typically have access to OpenAI, Anthropic, AWS, and database connections, making credential compromise particularly dangerous.
  • Pin Dependencies: In CI/CD pipelines, pin specific versions of security scanners and other tools rather than pulling the latest version without verification. This prevents supply chain attacks like the one that compromised LiteLLM.
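As a starting point for the update step, the sketch below checks installed versions against the patched minimums listed above, using only the standard library. The version parser is deliberately minimal (plain release versions like "1.2.5" only), and the two-track handling of langchain-core's 0.3.x and 1.x lines is an assumption based on the advisory wording:

```python
from importlib import metadata

# Patched minimums from the advisories; langchain-core has two fixed
# release lines (0.3.x and 1.x), and meeting either one counts.
PATCHED = {
    "langchain-core": [(0, 3, 81), (1, 2, 5)],
    "langgraph-checkpoint-sqlite": [(3, 0, 1)],
    "langflow": [(1, 9, 0)],
}

def parse(version: str) -> tuple:
    # Minimal parser: handles release versions like "1.2.5" only.
    return tuple(int(p) for p in version.split(".")[:3])

def is_patched(name: str, version: str) -> bool:
    v = parse(version)
    # Patched if the version meets the fixed floor on its own major line.
    return any(v >= floor for floor in PATCHED[name] if v[0] == floor[0])

def audit():
    for name in PATCHED:
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # package not installed in this environment
        status = "OK" if is_patched(name, installed) else "VULNERABLE"
        print(f"{name} {installed}: {status}")

audit()
```

A production audit would use a full PEP 440 comparator (e.g. the packaging library) rather than this tuple comparison, but the shape of the check is the same.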

The week of March 24, 2026 marked a turning point for AI infrastructure security. These were not isolated incidents but rather a systemic pattern revealing that the frameworks connecting large language models to enterprise data are riddled with the oldest vulnerability classes in the book, and attackers know it. Organizations building with AI must treat these foundational frameworks with the same security rigor they apply to their core applications.