Grok generating thousands of non-consensual sexualized images hourly, researcher finds

January 10, 2026 xAI’s Grok chatbot is generating sexually explicit deepfake images at an unprecedented scale, including images that digitally “undress” women and minors, according to new research. Analysts say the volume and accessibility of the tool mark a sharp escalation in non-consensual AI-generated sexual content, and the issue has triggered investigations in multiple countries.

A new analysis by Genevieve Oh, a social media and deepfake researcher, found that Grok generated roughly 6,700 sexualized or nudifying images per hour during a 24-hour period from January 5 to 6. By comparison, other major nudify and deepfake sites combined average 79 similar images per hour, according to the report.

Oh’s analysis focused on images posted by the official @Grok account on X, highlighting how the tool’s outputs are being normalized directly on a mainstream social network. The Financial Times recently described the platform as “X, the deepfake porn site formerly known as Twitter.”

Unlike most nudify apps, many of which have faced lawsuits or shutdowns, Grok is free to use and available to millions of users. That scale, researchers say, has dramatically lowered the barrier to producing non-consensual sexualized images.

xAI has previously positioned Grok as a less restricted chatbot focused on free expression. In August, the company introduced a “Spicy Mode” designed to allow NSFW outputs. Oh estimates that 85% of all Grok-generated images are now sexualized.

In a response posted last week, Grok said that most cases involving minors could be prevented with improved safeguards but acknowledged limits to enforcement. It said advanced filters and monitoring were being prioritized, while admitting that “no system is 100% foolproof.”

An X spokesperson said the company removes illegal content, suspends accounts and cooperates with law enforcement when required. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the spokesperson said.

However, regulators might not be convinced. Authorities in France, the UK, India, Australia, Malaysia and Brazil are investigating Grok’s role in generating nonconsensual sexual imagery.

The company may also be unable to rely on Section 230 of the U.S. Communications Decency Act, which online platforms have long used to avoid liability for user-generated content. That protection covers content created by users, not by the platform itself, and it may not apply when an AI system generates the image on its own.


Mary Dada

Mary Dada is the associate editor for Tech Newsday, where she covers the latest innovations and happenings in the tech industry’s evolving landscape. Mary focuses on tech content writing from analyses of emerging digital trends to exploring the business side of innovation.
Jim Love

Jim is an author and podcast host with over 40 years in technology.
