Elon Musk’s social network X has restricted Grok’s image-generation tool after an update in late December led to the creation of thousands of sexually explicit images. The feature is now available only to paying subscribers with verified payment details, allowing the platform to identify potential abusers, The Guardian reported.
UK Prime Minister Keir Starmer described the content as illegal and unacceptable, while media regulator Ofcom is reviewing possible enforcement actions under the country’s new Online Safety Act. The law grants regulators broader powers to fine or sanction platforms that fail to prevent harmful or illegal content.
Despite the restrictions on X, users have continued to generate sexualized material through a separate app, Grok Imagine. According to AI Forensics, the tool has been used to create more than 800 pornographic and violent images and videos, raising concerns that existing safeguards remain insufficient.
The Grok controversy is likely to become an early test case for how the UK’s Online Safety Act applies to generative AI, as regulators signal that partial feature restrictions will no longer be accepted as sufficient safeguards. In the medium term, this raises the risk of targeted enforcement against X and accelerates Europe’s shift toward stricter licensing, traceability, and user-level accountability for AI systems.