January 10, 2026 — xAI’s Grok chatbot is generating sexually explicit deepfake images at an unprecedented scale, including images that digitally “undress” women and minors, according to new research. Analysts say the tool’s volume and accessibility mark a sharp escalation in non-consensual AI-generated sexual content, and the issue has triggered investigations in multiple countries.
A new analysis by Genevieve Oh, a social media and deepfake researcher, found that Grok generated roughly 6,700 sexualized or nudifying images per hour during a 24-hour period from January 5 to 6. By comparison, other major nudify and deepfake sites combined average 79 similar images per hour, according to the report.
Oh’s analysis focused on images posted by the official @Grok account on X, highlighting how the tool’s outputs are being normalized directly on a mainstream social network. The Financial Times recently described the platform as “X, the deepfake porn site formerly known as Twitter.”
Unlike most nudify apps, many of which have faced lawsuits or shutdowns, Grok is free to use and available to millions of users. That scale, researchers say, has dramatically lowered the barrier to producing non-consensual sexualized images.
xAI has previously positioned Grok as a less restricted chatbot focused on free expression. In August, the company introduced a “Spicy Mode” designed to allow NSFW outputs. Oh estimates that 85% of all Grok-generated images are now sexualized.
In a response posted last week, Grok said that most cases involving minors could be prevented with improved safeguards but acknowledged limits to enforcement. It said advanced filters and monitoring were being prioritized, while admitting that “no system is 100% foolproof.”
An X spokesperson said the company removes illegal content, suspends accounts and cooperates with law enforcement when required. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the spokesperson said.
Regulators, however, appear unconvinced. Authorities in France, the UK, India, Australia, Malaysia and Brazil are investigating Grok’s role in generating non-consensual sexual imagery.
The company may also struggle to claim protection under Section 230 of the U.S. Communications Decency Act, which online platforms have long relied on to avoid liability for user-generated content. That shield covers content created by users, and the distinction may no longer hold when an AI system creates the image itself.
