European financial regulators have fundamentally shifted how they supervise algorithmic trading, moving from asking "Do you have controls?" to "Can you prove they work?" In February 2026, the European Securities and Markets Authority (ESMA) published a supervisory briefing that signals a turning point for investment firms using AI and algorithmic systems to execute trades. The guidance emerged from a 2024 investigation into how firms comply with existing regulations, itself prompted by a 2022 flash crash in Nordic markets.

This isn't just another compliance memo. ESMA's briefing explicitly states that the principles of governance, explainability, and control apply equally to AI systems, signaling where regulatory scrutiny is heading as artificial intelligence adoption accelerates across finance. For firms operating in the European Union, the briefing is non-binding but highly consequential; for those in the UK, the Financial Conduct Authority (FCA) is pursuing a parallel supervisory agenda through its own multi-firm review of algorithmic trading controls.

What Exactly Counts as Algorithmic Trading Now?

ESMA has removed the interpretive flexibility that firms have historically relied on. The regulator now defines algorithmic trading so broadly that an algorithm determining even a single order parameter qualifies: decisions about whether to initiate an order, its timing, price, quantity, or how to manage it after submission. Even when a human intervenes in the trading process, if a computer algorithm determined any individual parameter (other than routing or post-trade processing), the activity is algorithmic.

This definitional precision directly affects the scope of controls applied, the completeness of algorithmic inventories, and which systems must undergo testing and monitoring. Firms will need to reassess what they classify as "algorithmic," and the scope of their controls may need to expand accordingly.
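To see how little it takes to fall inside the definition, here is a minimal sketch of the "any single parameter" test as described above. The parameter names and the shape of the order record are illustrative assumptions, not terms from the briefing itself.

```python
# Minimal sketch of ESMA's broad test: an activity is algorithmic if an
# algorithm determined any single order parameter, excluding routing and
# post-trade processing. Parameter names here are illustrative.
EXCLUDED = {"routing", "post_trade_processing"}

def is_algorithmic(param_sources: dict) -> bool:
    """True if any in-scope order parameter was set by an algorithm."""
    return any(
        source == "algorithm"
        for param, source in param_sources.items()
        if param not in EXCLUDED
    )

# A human initiated the order, but an algorithm chose the price:
# still algorithmic under the broad definition.
example = {"initiation": "human", "timing": "human", "price": "algorithm",
           "quantity": "human", "routing": "algorithm"}
```

Note that the one excluded parameter set by an algorithm (routing) does not rescue the order: a single in-scope parameter is enough.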
How Are Pre-Trade Controls Being Reimagined as Risk Models?

Pre-trade controls (PTCs) are safeguards that prevent erroneous orders and maintain market orderliness. ESMA's message is clear: having controls is necessary but not sufficient. Firms must now demonstrate that their controls are well calibrated and effective. This represents a fundamental shift in how regulators view these systems.

The real change is that PTCs are no longer solely operational safeguards; they are becoming risk models in their own right. Firms should apply the same rigor to PTCs as they apply to market risk models. This means moving from static, point-in-time calibration to dynamic, evidence-based frameworks that include documented methodologies for threshold setting, regular back-testing against real trading data, clear escalation processes when limits are breached, and formal governance sign-off on calibration decisions.

Steps to Strengthen Your Algorithmic Trading Compliance Framework

- Reassess Algorithm Definitions: Ensure the scope of what your firm classifies as "algorithmic" reflects ESMA's interpretation, including activities that determine any single order parameter, and expand control scope accordingly.
- Strengthen Algorithm Inventories: Ensure each algorithmic trading strategy is clearly defined, testable, distinguishable from other strategies, and linked to market abuse surveillance systems for proper monitoring.
- Revalidate PTC Calibration: Treat pre-trade controls as dynamic risk models requiring quantitative evidence of calibration, ongoing back-testing against real trading data, and formal governance sign-off on all calibration decisions.
- Document Outsourcing Accountability: Retain full responsibility for third-party algorithms, vendor platforms, and Direct Electronic Access arrangements, with clear contractual terms and oversight mechanisms in place.
- Implement AI Model Oversight: Apply governance, explainability, and control principles specifically to AI systems, ensuring they can be monitored, tested, and validated before and after deployment.

ESMA has identified seven key areas under increased supervisory focus, with particular emphasis on how firms define algorithmic trading strategies. A strategy is now defined as a set of decision logic, implemented through one or more algorithms, that autonomously pursues a defined trading objective such as market making, arbitrage, or execution optimization. Each strategy must be testable, distinguishable from other strategies, subject to separate supervisory scrutiny, linked to observable trading behavior, and documented in a way that allows pre-deployment and post-change review.

The granularity of strategy classification determines the scope of testing obligations. Supervisors will expect firms that bundle multiple strategies into a single testing exercise to demonstrate that each strategy has been individually validated. Strategies must also be sufficiently distinguishable that surveillance systems can isolate their behavior and detect potentially abusive patterns.

Why Does Outsourcing Responsibility Matter More Than Ever?

One of the most direct reminders in ESMA's briefing is also one of the most important: outsourcing activities does not mean outsourcing responsibility. Whether firms rely on third-party algorithms, vendor platforms, or Direct Electronic Access (DEA) arrangements, they remain fully accountable for ensuring those systems comply with regulatory requirements.

This principle extends to AI systems. As artificial intelligence becomes more prevalent in trading operations, firms cannot delegate accountability to algorithm developers or AI vendors. Senior management must maintain oversight, ensure controls are effective, and demonstrate that outsourced systems meet the same standards as internally developed ones.
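The back-testing of PTC thresholds described earlier can be made concrete with a small sketch. Assume a historical order log in which erroneous orders were flagged after the fact; the HistoricalOrder record, its field names, and the price-collar logic below are illustrative assumptions, not anything prescribed by the briefing.

```python
from dataclasses import dataclass

@dataclass
class HistoricalOrder:
    price: float      # order limit price
    reference: float  # prevailing reference price at submission
    erroneous: bool   # flagged after the fact as erroneous

def backtest_price_collar(orders, collar_pct):
    """Back-test one candidate price-collar threshold.

    The collar blocks any order whose price deviates from the reference
    price by more than collar_pct. Returns the share of erroneous orders
    the collar would have blocked (catch rate) and the share of
    legitimate orders it would have wrongly blocked (false-block rate).
    """
    would_block = lambda o: abs(o.price - o.reference) / o.reference > collar_pct
    erroneous = [o for o in orders if o.erroneous]
    legitimate = [o for o in orders if not o.erroneous]
    catch_rate = sum(map(would_block, erroneous)) / max(len(erroneous), 1)
    false_block_rate = sum(map(would_block, legitimate)) / max(len(legitimate), 1)
    return catch_rate, false_block_rate
```

Running a function like this over a grid of candidate thresholds against each period's real trading data is one way to produce the kind of documented, quantitative calibration evidence the briefing calls for, with the resulting trade-off curve feeding the governance sign-off.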
The supervisory shift reflects a broader recognition that algorithmic trading and AI-driven finance carry systemic risks. The 2022 Nordic flash crash demonstrated how quickly market disruptions can occur when controls fail. By requiring firms to prove their controls work in live trading environments, ESMA is attempting to prevent similar incidents and protect market stability.

For investment firms across the EU, the message is clear: compliance is no longer about checking boxes. It's about demonstrating, with quantitative evidence and formal governance processes, that your algorithmic and AI trading systems are resilient, fit for purpose, and capable of operating safely in real markets. The firms that adapt quickly will be better positioned to navigate the regulatory landscape; those that delay risk increased supervisory scrutiny and potential enforcement action.