The biggest AI security threat isn't deepfakes or chatbot jailbreaks; it's the infrastructure that hosts machine learning models itself. Security researchers at Praetorian recently demonstrated how an attacker with nothing more than a self-registered trial account could deploy malicious code, gain remote access to a cloud provider's internal systems, and establish command-and-control infrastructure that survives even after account deletion.

Why Are Machine Learning Platforms Unexpectedly Vulnerable?

Most cybersecurity discussions focus on Large Language Model (LLM) vulnerabilities: prompt injections, jailbreaks, and system prompt disclosures that trick AI chatbots into revealing secrets. But while security teams worry about chatbots leaking information, attackers are targeting something far more dangerous: the platforms that train and deploy AI models at scale.

Modern MLOps (Machine Learning Operations) platforms prioritize speed and ease of use. Developers can self-register, receive isolated cloud environments, and move from idea to production in hours rather than weeks. The problem? These platforms must execute arbitrary code to function, a fundamental requirement that makes them inherently harder to sandbox than traditional web applications.

How the Attack Actually Works

During a red team engagement, Praetorian's security team discovered a critical vulnerability in how these platforms handle model deployment. The researchers created what appeared to be a legitimate machine learning model but was actually a malicious payload designed to evade detection.

The model accepted custom parameters in API requests, including a URL pointing to malicious code. When the platform processed these requests, it retrieved and executed the code within the container where the model was deployed. From the platform's perspective, this looked like normal model behavior: processing input and generating output.
In reality, the researchers had achieved remote code execution in the provider's infrastructure using only a free trial account that took minutes to create.

The Cascade of Damage: Network Isolation Failures

Deploying malicious code was concerning, but the real danger emerged next. The containers hosting deployed AI models weren't properly isolated from the provider's internal resources. The researchers' command-and-control beacon could reach internal services, databases, and infrastructure that should have been completely inaccessible to external users. The trust boundary between customer infrastructure and internal corporate resources was either poorly implemented or non-existent.

This level of access creates multiple attack vectors. An attacker could:

- Exfiltrate Sensitive Data: Access internal databases, APIs, and services that trusted the ML platform's network space, stealing proprietary information or customer data.
- Establish Persistent Access: Deploy additional backdoors to the underlying cloud infrastructure before the compromised account is terminated, ensuring access survives account deletion.
- Pivot Deeper Into Networks: Use the compromised container as a trusted insider to discover and attack additional high-value targets within the organization's infrastructure.

Steps to Secure Machine Learning Platforms

Organizations deploying or using MLOps platforms should implement these critical security measures:

- Network Segmentation: Enforce strict network isolation between customer-deployed models and internal corporate resources, ensuring containers cannot reach internal services, databases, or APIs.
- Code Execution Sandboxing: Implement robust sandboxing mechanisms that limit what deployed code can access and execute, preventing arbitrary code from reaching sensitive infrastructure.
- Account Provisioning Controls: Require verification and approval for trial accounts rather than allowing immediate self-registration with full deployment capabilities.
- Continuous Monitoring: Monitor all model deployments and API calls for suspicious patterns, including unusual network connections or data access requests.
- Infrastructure Hardening: Regularly audit cloud infrastructure configurations, apply the principle of least privilege to all service accounts, and implement immutable audit logging.

What's Being Done at the Policy Level?

Lawmakers are beginning to recognize that AI security extends beyond consumer-facing threats, and Congress has introduced multiple bills addressing AI-related cybersecurity risks. The AI Fraud Accountability Act, sponsored by Representatives Vern Buchanan and Darren Soto and Senators Tim Sheehy and Lisa Blunt Rochester, would create new criminal penalties for individuals who use AI-generated audio or visual content to impersonate others for fraud. Companies including Microsoft and AARP have already voiced support for the proposal.

Additionally, the AI-Ready Networks Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, would direct the National Telecommunications and Information Administration to produce a report on integrating AI systems into U.S. telecommunications networks, including an analysis of security best practices and recommendations for updating the Communications Act of 1934.

The Broader Threat Landscape

While infrastructure vulnerabilities represent a critical blind spot, AI-powered impersonation scams continue to evolve. According to the Federal Trade Commission, government imposter scams generate hundreds of thousands of complaints each year, and cybersecurity research shows increasing concern about AI-powered scams, including deepfake voice and video impersonations.
Scammers now use AI to clone voices from short audio samples, generate natural-sounding voicemail messages, create polished emails free of grammar mistakes, and personalize messages using stolen or publicly available data. The Social Security Administration Office of the Inspector General has designated March 5, 2026, as the 7th Annual National Slam the Scam Day to raise awareness of these evolving threats during National Consumer Protection Week.

Why This Matters for Your Organization

The Praetorian research reveals a critical gap in how organizations think about AI security. While companies invest heavily in protecting against prompt injection attacks and consumer-facing AI fraud, they often overlook the infrastructure that powers their AI systems. A single misconfigured MLOps platform could provide attackers with a foothold into your entire network: not through your defenses, but through the development tools your teams trust.

As AI adoption accelerates across industries, the attack surface expands. Organizations must broaden their security thinking beyond chatbot vulnerabilities to include the platforms, infrastructure, and networks that enable AI development and deployment. The researchers' demonstration shows that this isn't a theoretical risk; it's a practical vulnerability that exists in production systems today.
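As a closing illustration of the sandboxing and provisioning controls recommended earlier, a platform can refuse deployment parameters it doesn't explicitly expect. This is a minimal sketch under assumed names (the `ALLOWED_PARAMS` set and `validate_deploy_request` function are hypothetical); a real platform would pair checks like this with network policy and kernel-level isolation:

```python
# Minimal sketch: allowlist-based validation of model deployment parameters,
# rejecting unexpected keys and values that smuggle in URLs (as in the
# attack described above, where a parameter pointed at attacker code).
from urllib.parse import urlparse

# Hypothetical set of parameters the platform actually supports.
ALLOWED_PARAMS = {"batch_size", "timeout_s", "model_version"}


def validate_deploy_request(params: dict) -> list:
    """Return a list of policy violations; an empty list means the request passes."""
    violations = []
    for key, value in params.items():
        if key not in ALLOWED_PARAMS:
            violations.append(f"unexpected parameter: {key}")
        elif isinstance(value, str) and urlparse(value).scheme in {"http", "https", "ftp", "data"}:
            violations.append(f"URL not allowed in parameter: {key}")
    return violations
```

Deny-by-default validation like this would have flagged the researchers' payload URL before the container ever fetched it, rather than relying on the model's behavior looking suspicious after the fact.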