The Grok Nudify Controversy Is Another Example of the Need for International AI Regulation

January 15, 2026

Trying to stop non-consensual sexual images is an “excuse for censorship,” according to Elon Musk.

The world’s richest man has started 2026 amid a swirl of controversy, after Grok—the chatbot built into X—was used in what Reuters described as a “mass digital undressing spree.”

Grok responded to user requests to remove clothing from pictures of women without their consent. Among those targeted was Ashley St. Clair, the mother of one of Musk’s children, and according to Reuters there were also several cases where Grok generated sexualized images of children.

In response to the backlash, Grok told users on January 9 that image generation features would be limited to paying subscribers (although it remains possible for those subscribers to digitally undress pictures of women).

Regulators, however, have belatedly begun to act. Over the weekend of January 10, both Indonesia and Malaysia took the dramatic step of restricting access to Grok until effective safeguards could be introduced. The UK's media regulator Ofcom has launched an investigation into whether X breached UK law, while the European Commission condemned the chatbot and signaled it would review whether it had violated the Digital Services Act (DSA).

It's easy to criticize Grok, as it's the only mainstream LLM specifically marketed as being sexually permissive. But the controversy also points to one of the most ominous challenges facing generative AI: the ease with which it can supercharge the production of highly realistic non-consensual sexual imagery (including via "nudify" apps) and Child Sexual Abuse Material (CSAM). In May 2024, for instance, a Wisconsin man was charged with producing and distributing thousands of realistic AI-generated images of minors, and with advertising the images for sale on Instagram and Telegram.

That same technology is also powering a wave of deepfake fraud, which is imposing enormous costs on businesses. According to IBM, fraud involving deepfakes (including generative AI) had a global cost of over $1 trillion in 2024.

While cases like these are horrifying, they also represent one of the few remaining areas of bipartisan recognition that the law must catch up to our current digital reality.

In May 2025, the Take It Down Act became US law, criminalizing the publication of non-consensual intimate imagery, including AI deepfakes. The No Fakes Act, which was also reintroduced in 2025, would grant Americans a new federal right to control their voice and visual likeness. At the state level, Texas has passed a bill (SB 20) explicitly expanding CSAM protections to cover AI-generated content.

Together, these measures illustrate both the dangers posed by generative AI systems and the degree to which they strain existing governance frameworks. The fragmented global response, both to Grok and to AI-enabled non-consensual imagery generally, shows that policymakers are increasingly wise to Silicon Valley's tired trope of self-regulation and voluntary safeguards, and it underscores the need for clear, enforceable rules.

The unresolved problem, however, is scaling these rules. State-level laws, online safety regimes, and international enforcement protocols are all converging on the same set of digital harms, but doing so in isolation. The Grok episode shows that reactive moderation is no longer sufficient to combat the harms of AI. What is needed is a more coordinated architecture that sets baselines across borders, on the model of the EU's General Data Protection Regulation (GDPR), introduced in 2018 to give EU citizens more control over their data. Absent such coordination, incidents like the Grok nudify scandal will not remain isolated controversies but will proliferate, all while the law lags behind.
