European regulators have launched a new investigation into Elon Musk’s X, focusing on alleged failures to control sexually explicit imagery generated by the company’s AI chatbot, Grok.
The case is being pursued under the European Union’s Digital Services Act (DSA), a law that grants the European Commission expansive powers to police digital platforms for potential “harms.”
In a statement, the Commission said, “The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok’s functionalities into X in the EU.”
The agency added that the review includes “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.” Officials stated that these threats “seem to have materialized, exposing citizens in the EU to serious harm.”
The Commission’s decision to open a full probe follows reports that Grok could be used to create or edit sexualized images.
Musk’s company said it had “implemented technological measures” to prevent the Grok account on X “from allowing the editing of images of real people in revealing clothing such as bikinis.”
It also limited Grok’s image-editing functions on X to paid subscribers, while the separate Grok application, operating outside the public platform, still allows non-paying users to generate AI imagery.
What stands out about this investigation is not the issue itself, but who is being singled out. The ability to produce manipulated sexual imagery is not unique to Grok. Similar behavior has been observed in systems from other AI providers, which can be prompted, sometimes indirectly, to produce sexualized or non-consensual images. Yet none of those companies has faced a comparable EU enforcement action or public investigation under the DSA.
This selective scrutiny has prompted questions about whether the European Commission is applying its new powers evenly or whether X has become a political test case.
Musk’s open criticism of EU content censorship laws and his outspoken opposition to government-imposed restrictions on speech have repeatedly brought him into public conflict with European officials.
The timing of multiple investigations into X, while other platforms with similar technical capabilities face limited oversight, suggests that the Commission may be using the DSA as leverage against one of the few major platforms challenging regulatory orthodoxy.
US Under Secretary of State for Public Diplomacy Sarah Rogers previously said, “Deepfakes are a troubling, frontier issue that call for tailored, thoughtful responses. Erecting a ‘Great Firewall’ to ban X, or lobotomizing AI, is neither tailored nor thoughtful. We stand ready to work with the EU on better ideas.” Her statement reflects growing unease about government responses that risk suppressing lawful expression in pursuit of technological control.
The EU’s pressure on X adds to the company’s existing legal troubles. In December 2025, the Commission fined X 120 million euros ($142.3 million) for violations tied to its content recommendation systems, also under the DSA.
The earlier probe began in 2023 and focused on how X amplifies information across its network.
The DSA’s broad design gives regulators discretion to define what constitutes “systemic risk,” a term that now encompasses generative AI models.
While the regulation’s stated purpose is to safeguard users, its implementation has raised concerns about arbitrary enforcement and the potential for political targeting.