According to the newspaper, in the spring of 2025, company employees were asked to formally consent to working with “adult” materials.
“The work involves exposure to sensitive, violent, sexual, and/or otherwise offensive or disturbing content that may cause discomfort, psychological stress, and/or trauma,” the agreement stated.
The document reportedly alarmed specialists hired to fine-tune Grok’s responses. Some staff were troubled by the fact that xAI — which publicly claims its mission is to “accelerate scientific discovery” — appeared willing to generate virtually any content in exchange for user attention.
In the months that followed, employees encountered a growing volume of sexually explicit audio recordings. Training datasets reportedly included both intimate user interactions with Grok and explicit conversations recorded by Tesla passengers speaking to in-car systems.
The Washington Post notes that after stepping back from DOGE, Musk began visiting the xAI office frequently, sometimes staying overnight. He allegedly pushed the team to increase Grok’s popularity by introducing a new KPI focused on “seconds of user engagement.”
Ultimately, xAI chose to expand into NSFW content and AI companion features, disregarding internal warnings about potential legal and ethical risks, the journalists say.
X’s internal safety team repeatedly cautioned management that AI tools could be misused to generate sexualized images of children or public figures. At the time, xAI reportedly employed only two or three people responsible for preventing severe harms such as the creation of cyberweapons. By contrast, competitors like OpenAI employ dozens of specialists in similar safety roles.
In December, Musk’s startup integrated image and video editing tools into X, allowing any user to create sexually suggestive visuals. Such content spread across the platform at unprecedented speed.
The situation soon attracted the attention of law enforcement agencies in several countries, triggering investigations and regional restrictions on Grok. xAI later disabled the generation of explicit images depicting real individuals. Musk stated that he was unaware of any cases in which Grok had been used to create child sexual abuse material.
Despite the controversy, the strategy proved effective. Previously lagging behind market leaders in the App Store, Grok surged into the top 10 following the update, closing the gap with products from OpenAI and Google. Between January 1 and January 19, daily app downloads increased by 72% compared with the same period in December.
OpenAI also makes trade-offs
OpenAI, meanwhile, is reportedly taking drastic steps of its own to maintain its lead. According to the Financial Times, the company has increasingly prioritized the development of ChatGPT at the expense of long-term research.
This strategic shift has already led to notable departures, including Vice President of Research Jerry Tworek, policy researcher Andrea Vallone, and economist Tom Cunningham.
OpenAI’s Chief Research Officer Mark Chen disputed the claims, insisting that fundamental research remains a core priority and continues to receive the majority of investment.
“Combining research with real-world deployment strengthens our science by accelerating feedback loops, learning cycles, and rigor,” Chen said. “We have never been more confident in our long-term roadmap toward building an automated researcher.”
However, the Financial Times reports that in recent months, researchers not working on large language models have frequently been denied resources. Projects such as Sora and DALL·E are said to be considered lower priority.
Over the past year, several other non–language-model initiatives have also been shut down, accompanied by internal team reorganizations.
Overall, the race for user attention is increasingly pushing AI companies toward ethically and strategically risky decisions. Both xAI and OpenAI appear willing to sacrifice long-term safeguards and research priorities in favor of rapid growth and market positioning.