Most European countries are spreading AI oversight across a patchwork of existing regulators, but Poland is taking a radically different approach: building an entirely new, centralized authority.

When the EU's Artificial Intelligence Act (AI Act) entered into force in August 2024, the bloc's 27 member states were left to design their own enforcement structures. The choices they are making reveal a fundamental tension: how to build robust AI supervision when budgets are tight and AI specialists are scarce.

Poland stands out as one of only two EU countries, alongside Lithuania, to designate a single entity as its sole market surveillance authority for AI. More remarkably, Poland is the only member state building an entirely new institution for the role. The Commission for the Development and Safety of Artificial Intelligence, known by its Polish acronym KRiBSI, represents a deliberate gamble on centralization over fragmentation.

## Why Are Most EU Countries Spreading Oversight Across Multiple Agencies?

The overwhelming pattern across Europe is dispersion. As of early 2026, only nine of the 27 member states have officially designated their national AI authorities, with a further ten in the process of doing so. Among the 19 countries that have designated or proposed their governance models, the vast majority are parceling out market surveillance duties among existing regulators. France, for example, plans to involve 14 separate bodies, including data protection offices, financial supervisors, telecoms regulators, health agencies, and others.

This fragmented approach reflects a practical reality: most governments lack the budget and the political will to create new institutions. Spreading oversight across existing agencies feels like a lower-cost solution, even if it creates coordination headaches.
But Poland's government rejected this logic, arguing that the dispersed model would waste resources and trigger harmful competition between agencies for the same scarce pool of AI experts.

## How Is Poland Structuring Its New AI Regulator to Handle Sector-Specific Complexity?

Poland's approach to centralization includes several safeguards designed to address the obvious risk: that a single horizontal authority might lack the deep sector-specific knowledge needed to oversee AI in healthcare, finance, education, and law enforcement. To compensate, the Polish legislator gave KRiBSI a collegiate structure, with representatives of the competition authority, the financial supervisor, the broadcasting council, and the telecoms regulator embedded directly in its decision-making body.

Beyond its governance structure, Poland's draft legislation introduces three instruments that address challenges shared across the EU:

- Individual Opinions Mechanism: Companies can formally request binding opinions from KRiBSI on how the regulation applies to their specific product or service, offering the upfront legal certainty that businesses across Europe are demanding.
- Social Council for AI: An advisory body of 9 to 15 members drawn from academia, civil society, business chambers, and trade unions, all required to have expertise in AI, cybersecurity, or human rights, designed to bridge the public sector's expertise gap.
- Regulatory Sandboxes: Companies can first seek a binding opinion to clarify whether their system counts as high-risk, then use a sandbox to test compliance in a controlled environment.

The Social Council's members serve deliberately short two-year terms to keep pace with rapid technological change. The Council's opinions are not binding and its positions are unpaid, which may limit its influence, but the structure provides a foundation that can be strengthened as the enforcement regime matures.

## What's the Catch? The Independence Problem That Could Undermine Poland's Model

Poland's experiment faces a critical vulnerability. Under budgetary pressure, the February 2026 draft confirms KRiBSI as the designated authority for AI Act enforcement but nests its operational support within the Ministry of Digital Affairs. The staff, budget, and infrastructure on which the Commission depends are administered by the very ministry whose policy portfolio it is supposed to oversee independently.

Under earlier drafts, KRiBSI was to be supported by a standalone Bureau with its own legal personality, a structure that would have provided genuine institutional independence. That design was abandoned after fiscal objections from the Ministry of Finance. The result is a troubling paradox: statutory independence guarantees on paper cannot fully compensate for administrative dependence in practice.

The tension Poland faces is unlikely to be unique. Every member state building its AI enforcement machinery confronts the same uncomfortable questions. How much is robust, independent oversight actually worth? Can statutory guarantees of independence offset administrative dependence? And when specialists are scarce, is it better to concentrate them in one place or embed them across many? There are no clean answers.

## What Does Poland's Gamble Mean for the Rest of Europe?

Poland's centralized model offers clear advantages: a single point of contact for businesses, easier EU-level coordination through the European AI Board, and avoidance of the fragmentation that can slow enforcement when responsibilities are scattered across a dozen agencies. But it places an extraordinary burden on a single body, which must develop sufficient understanding of every sector in which high-risk AI systems are deployed, from healthcare and finance to education and law enforcement.
With more than two-thirds of EU member states yet to finalize their own governance models, Poland's early and distinctive bet, to centralize, to build new, and to supplement with advisory innovation, deserves close attention as a live experiment in what it takes to turn AI regulation from paper into practice. The draft law has yet to reach Parliament, and amendments could still alter the design. Even then, the outcome will only become clear once the enforcement deadlines arrive and regulators must actually investigate violations and impose penalties.

For companies operating across Europe, the divergence in national enforcement structures creates both opportunity and risk. Those seeking legal certainty can pursue Poland's individual opinions mechanism. Those operating in fragmented markets like France must navigate coordination across multiple agencies. As the AI Act matures, the success or failure of Poland's centralized model may influence how other member states reshape their own enforcement architectures in the years ahead.