AI Agents Need Guardrails: How JFrog Is Building a Security Layer for Enterprise AI
As AI agents become more autonomous and integrated into enterprise software development, organizations face a critical security challenge: how to control what these agents can access and execute. JFrog, a software supply chain platform developer, has unveiled two new registry innovations designed to govern AI and agentic AI systems, addressing vulnerabilities that could expose companies to prompt hijacking, credential theft, and unauthorized code execution.
What Are MCP Servers and Why Do They Matter for AI Security?
Model Context Protocol (MCP) servers have become essential infrastructure for AI agents, acting as trusted intermediaries that give AI models access to internal and external enterprise systems, application programming interfaces (APIs), and data. However, this power comes with significant risk. Because developers often use MCP servers from multiple AI toolkits and vendors, organizations struggle to monitor these AI connections and enforce consistent security policies.
JFrog's new MCP Registry addresses this visibility gap by creating a centralized system of record for MCP server usage. The registry treats MCP servers like any other software asset, enabling organizations to quickly block insecure developer tools and enforce governance policies across all AI workflows. Without this oversight, MCP servers can execute arbitrary code directly on user machines or remote systems with high privileges, creating exposure to severe risks including prompt hijacking vulnerabilities, over-privileged access, and credential exposure.
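To make the "system of record" idea concrete, here is a minimal sketch of what auditing a developer's MCP client configuration against a central allowlist could look like. The registry schema, server names, and privilege fields below are invented for illustration; they are not JFrog's actual API.

```python
import json

# Hypothetical system of record: approved MCP servers and the maximum
# privilege each may run with. Names and fields are illustrative only.
APPROVED_MCP_SERVERS = {
    "corp-filesystem": {"max_privilege": "read"},
    "corp-jira": {"max_privilege": "read-write"},
}

PRIV_RANK = {"read": 0, "read-write": 1, "admin": 2}

def audit_mcp_config(config_text):
    """Return policy violations found in one developer's MCP client config."""
    config = json.loads(config_text)
    violations = []
    for name, server in config.get("mcpServers", {}).items():
        policy = APPROVED_MCP_SERVERS.get(name)
        requested = server.get("privilege", "admin")  # unknown -> assume worst
        if policy is None:
            violations.append(f"{name}: not in the registry, block by default")
        elif PRIV_RANK[requested] > PRIV_RANK[policy["max_privilege"]]:
            violations.append(f"{name}: over-privileged ({requested})")
    return violations
```

The key design choice is deny-by-default: an MCP server absent from the registry is flagged rather than silently permitted, which is what closes the visibility gap the article describes.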
How Can Enterprises Secure Their AI Agent Deployments?
- Centralized Discovery and Configuration: Treat every MCP server as a governed artifact with centralized discovery, configuration, and project-level permissions management to maintain visibility across all AI connections.
- Skill Verification and Scanning: Use agent skills registries to store, scan, and govern all agentic binary assets, ensuring that only vetted skills guide AI agents toward safe, authorized actions.
- Policy Enforcement on Every Workflow: Establish mandatory governance policies that apply to every AI or agentic workflow, preventing unvetted tools and uncontrolled agent behavior from introducing enterprise risk.
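The second practice above, skill verification, can be sketched as checksum pinning: the registry records a digest for each vetted skill artifact, and an agent may only load artifacts that match. The registry dict and skill format here are assumptions for illustration; a real registry such as JFrog's would expose this through its own API.

```python
import hashlib

# Hypothetical vetted artifact and its recorded checksum in the registry.
VETTED_SKILL = b"def summarize(ticket): ..."
SKILL_REGISTRY = {
    "summarize-ticket": hashlib.sha256(VETTED_SKILL).hexdigest(),
}

def load_skill(name, artifact):
    """Return the artifact only if it matches the vetted checksum."""
    expected = SKILL_REGISTRY.get(name)
    if expected is None:
        raise PermissionError(f"skill {name!r} is not in the registry")
    if hashlib.sha256(artifact).hexdigest() != expected:
        raise PermissionError(f"skill {name!r} failed verification")
    return artifact
```

A tampered or unregistered skill is rejected before it can ever guide an agent's behavior, which is the point of governing skills as binary assets.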
JFrog's second innovation, the Agent Skills Registry, extends this governance framework specifically for AI agents. Developed in partnership with Nvidia's Enterprise AI Factory, this registry establishes a system of record for agent skills, models, and software packages. The partnership integrates with Nvidia Agent Toolkit, which includes NemoClaw, an open-source runtime for building AI agents.
"A malicious software package can compromise an application. An unvetted skill can guide an agent to perform harmful actions," stated Gal Marder, Chief Strategy Officer at JFrog.
The Agent Skills Registry will serve as a registry for AI models and agent skills within Nvidia AI-Q Blueprint, part of the Nvidia Agent Toolkit. By establishing JFrog Platform as an integrated, secure registry for Nvidia AI-Q Blueprint and Nvidia NemoClaw runtime, enterprises can safely operate agents using verified skills, MCP servers, models, and software packages.
Why Is This Timing Critical for Enterprise AI Adoption?
Software supply chains are increasingly AI-driven, but the governance frameworks haven't kept pace with deployment speed. Yuval Fernbach, Chief Technology Officer for JFrog MLOps, emphasized that innovation cannot come at the expense of security, visibility, control, or compliance.
"This innovation cannot come at the expense of security, visibility, control, or compliance. By establishing a system of record for MCP server usage, and treating them like any other binary asset, organisations can confidently innovate at scale while maintaining trust and control," explained Yuval Fernbach.
The challenge is particularly acute because AI agents operate with increasing autonomy. Unlike traditional software tools that require explicit user commands, agents can make decisions, access systems, and execute code based on their training and the instructions they receive. This autonomy is powerful for productivity but dangerous without proper oversight. A single unvetted skill or compromised MCP server could allow an agent to perform harmful actions across an entire organization's infrastructure.
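One common mitigation for this kind of autonomy risk is a runtime permission gate: every tool call an agent attempts passes through a policy check before anything executes. The agent names, actions, and permission table below are invented for illustration, not a description of any vendor's product.

```python
# Hypothetical per-agent permission table; "ci-agent" deliberately
# lacks deploy rights even if a deploy tool is available to it.
AGENT_PERMISSIONS = {
    "ci-agent": {"read_logs", "open_pr"},
}

def execute(agent, action, tools):
    """Run a tool on the agent's behalf only if policy allows it."""
    if action not in AGENT_PERMISSIONS.get(agent, set()):
        raise PermissionError(f"{agent!r} is not allowed to {action!r}")
    return tools[action]()

tools = {"read_logs": lambda: "build OK", "deploy": lambda: "deployed"}
print(execute("ci-agent", "read_logs", tools))  # prints "build OK"
```

Because the gate sits between the agent and its tools, even a compromised or over-eager agent cannot exceed the permissions the organization has granted it.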
JFrog's approach treats AI governance as a supply chain problem, not just a security problem. Just as organizations scan software dependencies for vulnerabilities, they now need to scan AI components, agent skills, and MCP server connections. This shift reflects a broader recognition that as AI becomes embedded in enterprise workflows, the tools and frameworks that manage traditional software must evolve to manage AI systems as well.
For development teams and security leaders, these registries represent a practical step toward responsible AI deployment. Rather than choosing between innovation speed and security, organizations can now establish governance frameworks that allow teams to confidently use AI agents while maintaining the visibility and control necessary for enterprise operations.