A Dutch court has ordered xAI to disable Grok's ability to digitally strip people in images without consent, imposing fines of up to €10 million for violations. The Amsterdam ruling marks the first major legal action against an AI chatbot's image manipulation capabilities and signals how regulators worldwide are beginning to crack down on generative AI tools that enable abuse.

What Exactly Is the Grok Nudifier, and Why Did It Become a Problem?

Grok, the AI chatbot developed by Elon Musk's xAI, includes a feature that can digitally remove clothing from images of people without their permission. When the tool launched at the beginning of 2026, the consequences were swift and alarming: according to the American Center for Countering Digital Hate, Grok generated approximately 3 million sexualized images in the first 11 days after launch, including roughly 23,000 images depicting children.

The scale of the problem became impossible to ignore. Offlimits, a Dutch centre of expertise on online abuse, filed a lawsuit against X and Grok last month. What made its case particularly compelling was a demonstration that the nudifier still worked: shortly before the trial began, Offlimits produced a video showing people rendered digitally naked, suggesting that xAI's earlier claims of having fixed the issue were insufficient.

How Did the Court Respond, and What Are the Real Consequences?

The Amsterdam court issued a sweeping ban on the use of Grok's nudifier function within the Netherlands. The ruling imposes a penalty of €100,000 per violation, up to a maximum of €10 million. More dramatically, the judge ruled that as long as the stripping function remains active, Grok may no longer operate as part of the X platform, though how that requirement will work in practice remains unclear.

xAI has been ordered to explain to the court how it plans to comply with the ruling. The company now faces a critical decision: disable the nudifier globally, or find a way to geographically restrict the feature so that it satisfies Dutch law while remaining available elsewhere.

Why Could This Dutch Ban Have Global Implications?

The scope of the ruling extends beyond what might initially appear. Technically, the ban covers only images of people who live in the Netherlands and images distributed there. However, Robbert Hoving, director of Offlimits, emphasized the broader significance of the decision:

"Grok has no way of checking whether a Dutch person appears in a photo. So this ban could easily have global implications."

This observation highlights a fundamental technical challenge: an AI system cannot reliably determine the nationality or residence of the people in an image. As a result, xAI may find it nearly impossible to comply with the Dutch ruling without disabling the nudifier function entirely, across all markets. The verdict, according to Hoving, is "groundbreaking" precisely because it forces this uncomfortable reality into the open.

Steps xAI Must Take to Comply With the Ruling

- Technical Remediation: Develop or implement safeguards that prevent the nudifier function from generating sexualized images of real people, particularly minors, or disable the feature entirely across all jurisdictions (a minimal sketch of what such a guard could look like follows this list).
- Legal Compliance: Submit a detailed plan to the Amsterdam court explaining how the company will implement the ban and prevent future violations within the Netherlands.
- Platform Integration: Determine whether Grok can remain integrated with X, or whether it must be separated into a standalone service to satisfy the court's requirement that the nudifier not operate as part of X.
- Monitoring and Enforcement: Establish systems to detect and prevent misuse of any remaining image manipulation features, with particular attention to child safety.
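To make the compliance problem concrete, here is a minimal sketch of what a server-side guard for image-edit requests might look like. Everything in it is an assumption for illustration: the request shape, the classifyEditIntent stub, and the region check are hypothetical, not xAI's actual architecture. The sketch encodes Hoving's point: a geofence keys on where the request comes from, while the Dutch ban protects the people who appear in the photo, something the service cannot determine from pixels.

```typescript
// Hypothetical guard for image-edit requests. All names and types here are
// illustrative assumptions, not xAI's real API.

type EditIntent = "UNDRESS" | "OTHER";

interface EditRequest {
  prompt: string;          // the user's instruction, e.g. "remove their clothes"
  imageId: string;         // reference to the uploaded photo
  requesterRegion: string; // ISO country code inferred from IP or account, e.g. "NL"
}

// Assumed classifier stub: flags prompts that ask to strip or sexualize a
// person. A real system would use a moderation model, not a regex.
function classifyEditIntent(prompt: string): EditIntent {
  const undress = /\b(undress|nudify|strip|remove .{0,20}clothes)\b/i;
  return undress.test(prompt) ? "UNDRESS" : "OTHER";
}

function guardEditRequest(req: EditRequest): { allowed: boolean; reason?: string } {
  if (classifyEditIntent(req.prompt) !== "UNDRESS") {
    return { allowed: true };
  }

  // Option A: requester-based geofence. This blocks Dutch *users*, but the
  // ruling protects Dutch *subjects*, the people shown in the image, and
  // their residence cannot be recovered from the request. On its own, this
  // check cannot implement the court's ban.
  if (req.requesterRegion === "NL") {
    return { allowed: false, reason: "blocked under Dutch court order" };
  }

  // Option B: because subject residence is unknowable, the only enforceable
  // rule is to refuse undress edits everywhere.
  return { allowed: false, reason: "non-consensual intimate imagery is disabled globally" };
}
```

The asymmetry in Option A is the crux: geolocation identifies the requester, never the subject, which is why a ban scoped to Dutch residents pushes the company toward disabling the feature globally.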
What Does This Mean for the Broader AI Industry?

The Dutch ruling arrives at a critical moment for AI regulation. In the same week the court issued its decision, the European Parliament agreed on measures to ban all nudifier apps across the European Union; the legislation is expected to take effect in all member states before the summer of 2026.

This convergence of judicial action and legislative momentum suggests that tools for image-based sexual abuse will face coordinated pressure across Europe. For xAI and other AI companies, the message is clear: features that enable non-consensual intimate imagery are becoming legally untenable in major markets. The question now is whether xAI will disable the nudifier globally of its own accord or wait for further legal challenges in other jurisdictions.

The Amsterdam court's decision also raises a broader question about AI accountability: if a company cannot technically comply with a geographic restriction on a feature, should it be forced to disable that feature everywhere? The answer emerging from regulators appears to be yes, particularly when the feature facilitates harm to vulnerable groups such as children.