Governments are confronting the reality of dealing with AI-generated deepfake porn

01/13/2026

A reckoning has come to the world of AI deepfake pornography.

Last week, Elon Musk-owned X implemented restrictions on its AI chatbot Grok, preventing it from generating images for anyone who’s not a paying X subscriber. Why? Because researchers found Grok was obediently helping X users generate an “unprecedented” number of sexual images of women and children.

Per Bloomberg, researchers reported X had become a top site for non-consensual undressing via AI, a practice where someone provides a normal photo of a person and asks a generator to make NSFW images of that same person.


A 24-hour analysis of X/Grok content showed the chatbot was generating an average of 6,700 sexually suggestive or “nudifying” images every single hour. Researcher Genevieve Oh said that in the same timeframe, the internet’s other top five websites for “nudifying” content averaged 79 generated images per hour. None of the sites are ethical, but the numbers alone show how much greater Grok’s impact was compared to the others.

Lawyer Carrie Goldberg, who specializes in online sex crimes, told Bloomberg that because Grok was free and built into X, “We’ve never had a technology that’s made it so easy to generate new images.”

As Bloomberg pointed out, Musk’s response to the research wasn’t a statement about how xAI, his company that programmed Grok, would figure out how to prevent the bot from generating non-consensual, NSFW images.

It was about how X users who tap Grok for these images will be punished.

“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he tweeted.

That response was too little too late for some regions.

Indonesia and Malaysia both issued country-wide blocks on Grok over the weekend, making it inaccessible to residents.

In a statement, Indonesia’s digital minister Meutya Hafid said the ban is to “protect women, children and the broader public from the risks of fake pornographic content generated using artificial intelligence technology.”

Then, on Jan. 12, U.K. media regulator Ofcom announced it’s opening an investigation into X under the U.K.’s Online Safety Act.

“There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people, which may amount to intimate image abuse or pornography, and sexualised images of children that may amount to child sexual abuse material,” Ofcom said in a statement.

It added that X met a Jan. 9 deadline to provide Ofcom with a list of “steps it has taken to comply with its duties to protect its users in the UK,” but the platform still faces “a formal investigation to establish whether X has failed to comply with its legal obligations under the Online Safety Act.”

Things are also moving in the U.S.

Though this isn’t a direct response to the situation with X, the Senate today unanimously passed a bill that would give victims of non-consensual deepfake pornography grounds to file suit against the people who produced and distributed the images and videos.

“Give to the victims their day in court to hold those responsible who continue to publish these images at their expense,” Sen. Dick Durbin (D-Ill.), who put the bill forward, said on the Senate floor. “Today, we are one step closer to making this a reality.”

The bill—called the Disrupt Explicit Forged Images and Non-Consensual Edits (Defiance) Act—still has to pass in the House.

If it does go into effect, it’s not clear whether xAI and other makers of generators that crank out deepfake porn could be held liable. Then there’s the matter of X, which, as a digital content platform, would normally be protected by safe harbor laws. But are things different since Grok is a homegrown tool baked into it? Lawmakers haven’t kept up with the rapid spread of gen AI technology, so there aren’t yet any clear rules for these situations.

However things shake out, Musk and xAI might want to, you know, perhaps consider taking the novel step of programming their chatbot to stop making child sex abuse material. Seems like a no-brainer to us.
