Europe's industrial base is facing a compliance crisis that could undermine the continent's ability to compete globally. A coalition of machinery manufacturers, medical device makers, and industrial companies is sounding the alarm about the EU AI Act, arguing that overlapping regulations are creating impossible cost burdens for businesses already operating under strict sectoral safety rules. The concern centers on a fundamental design flaw: companies regulated under existing product safety frameworks are being forced to comply with the AI Act simultaneously, creating what industry groups call a "double or even triple layer of regulation".

The financial stakes are staggering. According to the Commission's own analysis, a small or medium-sized enterprise (SME) developing a high-risk AI system could face up to €319,000 in initial compliance costs, plus up to €150,000 per year in ongoing expenses. When certification and staff costs are factored in, initial compliance expenses balloon to €600,000, translating into 30 to 40 percent profit erosion for SMEs. For companies already investing heavily in digital transformation, these additional burdens threaten their ability to innovate and deploy new products.

Why Are Manufacturers Caught in Regulatory Crossfire?

The problem isn't that Europe is regulating AI; it's that the AI Act doesn't account for sectors that already have robust oversight mechanisms in place. Machinery manufacturers integrating AI-based safety functions into industrial equipment, medical technology companies developing AI-enabled devices, and producers of radio equipment and other connected industrial products increasingly rely on AI components. These companies must now demonstrate compliance under both their existing product safety rules and the new AI Act requirements.

This creates a cascade of practical problems.
Overlapping and potentially conflicting documentation and conformity assessments risk delaying certification and slowing the deployment of innovative products. A medical device company, for example, might need to satisfy both the Medical Device Regulation and the AI Act's high-risk classification requirements, even though both frameworks are designed to protect patient safety. The duplication doesn't add protection; it adds bureaucracy.

How Can Europe Fix This Without Weakening AI Safeguards?

Industry groups are proposing a targeted solution: move sectors already governed by existing product safety frameworks from Section A to Section B of Annex I of the AI Act, so that AI-related requirements can be addressed through the appropriate sectoral regulatory frameworks and authorities. This wouldn't eliminate AI oversight; it would consolidate it under the regulators who already understand these industries.

- Machinery and Industrial Equipment: AI safety functions in industrial machinery should be governed by existing machinery safety directives rather than duplicated under the AI Act's high-risk classification.
- Medical Devices and Healthcare Technology: AI-enabled medical devices already face rigorous clinical validation and safety requirements; these should serve as the primary compliance framework for AI components.
- Radio Equipment and Connected Products: Connected industrial products regulated under radio equipment directives should consolidate AI oversight under existing sectoral authorities rather than triggering separate AI Act compliance.
- Energy and Automotive Sectors: Industries with established safety and performance standards should leverage those frameworks to govern AI integration rather than creating parallel compliance pathways.

The timing of this push matters. Europe is facing a competitiveness crisis.
The Draghi report estimates that the EU needs €750 to €800 billion in additional investment per year to remain globally competitive. At the same time, regulatory compliance costs have grown substantially, with estimates suggesting they may now reach around €500 billion per year across the EU economy. Adding €600,000 in AI compliance costs for SMEs isn't just a burden; it's a strategic mistake at a moment when Europe needs to accelerate innovation, not slow it down.

What's the Broader Regulatory Alignment Problem?

The AI Act doesn't exist in isolation. It interacts directly with the General Data Protection Regulation (GDPR), the Data Act, and cybersecurity legislation. Industry groups are calling for better alignment between the GDPR and the AI Act, particularly regarding how data can be used for AI development and deployment. The timelines of the AI omnibus and the digital omnibus should also be better aligned, since the content of these frameworks is closely interconnected.

Data protection authorities are also weighing in on the broader regulatory ecosystem. The European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) have emphasized that "the relationship between data protection and cybersecurity is reciprocal and deeply interconnected". While cybersecurity supports the protection of personal data by limiting the risks of unwanted access, modification, or unavailability of data, it is crucial to ensure that security controls are implemented in a way that does not undermine individuals' fundamental rights and freedoms. This principle should extend to how the AI Act interacts with data protection frameworks.

The stakes extend beyond individual companies. Europe's digital sovereignty depends on maintaining a competitive industrial base capable of developing and deploying AI-powered products.
If compliance costs force SMEs to abandon AI innovation or relocate to less regulated jurisdictions, Europe loses both talent and market share. The window to address this is closing; industry groups are calling on the European Commission to table a targeted, standalone proposal to postpone the upcoming AI Act application deadlines, similar to what was done for the sustainability omnibus.

The broader lesson is that regulation and innovation aren't inherently at odds, but poorly designed regulation can pit them against each other. Europe's challenge is to maintain robust AI safeguards while removing unnecessary barriers that duplicate existing oversight. That balance is achievable, but only if policymakers act quickly to streamline the regulatory framework before compliance costs become prohibitive.