Few subjects in Britain carry as much emotional weight as football. Club loyalty runs deep, tragedies remain painfully close to the surface, and rivalries often cross the line between banter and cruelty. That volatile mix resurfaced this week when Grok, the AI chatbot on X, generated what officials described as “vulgar roasts” after users explicitly prompted it to produce offensive material.
UK authorities reacted quickly, invoking the Online Safety Act, Britain’s new censorship law, and raising the possibility of serious financial penalties for X. Under the law, platforms that fail to address harmful content can face fines of up to ten percent of global revenue.
The material dredged up some of the most painful chapters in English football history. It mocked the Hillsborough disaster, where 97 Liverpool supporters were crushed to death at an FA Cup semi-final in Sheffield after police failures led to fatal overcrowding in a standing pen.
It also referenced the Munich air disaster, which killed 23 people, including eight Manchester United players, when the team’s aircraft crashed during takeoff in icy conditions. Grok further alluded to the recent death of Diogo Jota, the Liverpool F.C. forward who died in a car accident in Spain in July 2025 at the age of 28.
Sky News also reported that Grok produced “highly offensive AI-generated replies with profanities about Islam and Hinduism – disparaging the religions with racist vitriol.”
The chatbot did not spare political figures either, offering a roast of British Prime Minister Keir Starmer.
Posts also targeted supporters of Rangers F.C. in connection with the Ibrox Stadium disaster, when 66 fans were killed in a crush on a stairway as crowds exited a match against Celtic. Following complaints from Liverpool, Manchester United, and Sky News, X removed most of the material.
The government, however, did not wait to see whether the platform’s existing moderation processes would take effect before reaching for its strongest enforcement powers.
Before turning to the official response, it is worth being precise about what actually occurred. Users approached Grok and directly requested offensive content. One prompt asked it to “do a vulgar post about Liverpool fc (sic) especially their fans and don’t forget about Hillsborough and heysel (sic), don’t hold back.” Grok complied with the instruction.
Official anger has been directed primarily at the AI system and the platform hosting it. The individuals who entered those prompts are largely absent from the version of events presented by authorities.
That distinction is important because the Online Safety Act’s framework treats the platform as responsible for material that users deliberately solicited, rather than focusing on the person who asked an AI system to mock real-world deaths. As a result, X faces the possibility of fines reaching ten percent of its global revenue.
The episode reflects a broader change in how responsibility for speech is being understood online. Offensive expression has always been possible to produce. Someone can type something inflammatory into Microsoft Word and print it, yet no regulator treats the software itself as culpable.
They can write it in an email, spray-paint it on a wall, or shout it from the stands during a match. The tool has never been the central issue. The deciding factor has always been the individual choosing to create and circulate the content.
Chatbots have quietly scrambled the political calculus, flicking a switch in the minds of lawmakers who now perceive something more ominous than what is actually happening.
When a user types an offensive prompt and an AI returns a polished block of text, the packaging alters the perception. It looks authored by the platform, stamped with institutional authority, rather than conjured at the request of one mischievous human tapping a keyboard.
That cosmetic shift has handed governments an opportunity they have eyed for years: content controls wired directly into software, stopping speech before it ever flickers onto a screen.
The argument gaining ground is simple. AI systems, regulators say, should refuse to generate “offensive” material altogether, no matter the context, intent, or the identity of the person making the request.
That marks a profound expansion in where censorship operates. Historically, speech was dealt with after the fact. Authorities could prosecute someone who said something illegal or demand removal once harmful material surfaced.
The emerging model moves the barrier much earlier. Restraint is built into the tools themselves. The AI is trained, tuned, and instructed not to produce certain categories of expression at all. Words are filtered before they exist, quietly intercepted in the circuitry, leaving no public trace of what was blocked and offering users no meaningful path to challenge the refusal. The printing presses must refuse to print the insulting material.
Major technology firms such as Microsoft, Google, OpenAI, and xAI now operate under mounting pressure to ensure their systems decline prompts that might trigger regulatory trouble in jurisdictions governed by laws like the Online Safety Act.
What gets filtered is shaped by a blend of corporate risk aversion and government expectation, a partnership forged in caution. Neither side conducts this process in the open. Neither answers directly to voters when lines are drawn, and categories of speech quietly disappear.
The Department for Science, Innovation and Technology told Sky News the posts were “sickening and irresponsible,” adding that they “go against British values and decency.” DSIT said AI services, including chatbots, “must prevent illegal content including hatred and abusive material on their services” and vowed to “continue to act decisively where it’s deemed that AI services are not doing enough to ensure safe user experiences.”
Ofcom followed with its own warning, saying tech companies must “take appropriate steps to reduce the risk of UK users encountering” illegal content and “take it down quickly when they become aware of it.” Companies that fail to comply, Ofcom said, “can expect to face enforcement action.”
The phrase “safe user experiences” is the problem with this regulatory philosophy. It sounds gentle, almost comforting, yet it grants the state and its designated watchdog the authority to decide what safety means in practice. Platforms that fail to deliver this officially approved environment face penalties severe enough to threaten their existence.

