Grok, the flagship chatbot created by the Elon Musk-founded AI venture xAI and infused into X-formerly-Twitter — a platform also owned by Elon Musk — continues to be used by trollish misogynists, pedophiles, and other freaks of the digital gutters to non-consensually undress images of women and, even more horrifyingly, underage girls.
The women and girls targeted in these images range from celebrities and public figures to ordinary private citizens. As Futurism reported, some of the AI images generated by Grok and automatically published to X were specifically altered to depict real women in violent scenarios, including scenes of sexual abuse, humiliation, physical injury, kidnapping, and insinuated murder.
Because Grok is integrated into X, this growing pile of nonconsensual and seemingly illegal images is automatically published directly to the social media platform — and thus disseminated to the open web, in plain view, visible to pretty much anyone. As it stands, X and xAI have yet to take any meaningful action to stem the tide.
Below is a timeline of how this story has unfolded so far. We’ll continue to update it as we follow whether X and xAI take action against this flood of harmful content.
- January 15, 2026: Ashley St. Clair, the mother of one of Musk’s children, sues xAI over Grok-generated deepfakes. In response, according to the BBC, xAI countersues, alleging that St. Clair violated the platform’s terms of service.
- January 15, 2026: Despite the X update, the standalone Grok app and Grok website continue to generate photorealistic nude images of real people — including in the UK, where X is facing fierce backlash and regulatory scrutiny, according to reports from Wired, Bellingcat, and The Verge.
- January 14, 2026: X Safety published a post saying that the company has a “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content,” and that it has “implemented technological measures to prevent the [@]Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis.” X Safety added that both X and xAI would “geoblock” the ability of users to “generate images of real people in bikinis, underwear, and similar attire” in regions where that content is “illegal.”
- January 14, 2026: The Atlantic reached out to an array of high-profile X investors, including “Andreessen Horowitz, Sequoia Capital, BlackRock, Morgan Stanley, Fidelity Management & Research Company, the Saudi firm Kingdom Holding Company, and the state-owned investment firms of Oman, Qatar, and the United Arab Emirates, among others,” to ask if any of these funders “endorsed the use of X and Grok to generate and distribute nonconsensual sexualized images,” the magazine reported. Per the report, none of these investors provided a response.
- January 11, 2026: Malaysia and Indonesia ban access to X, citing Grok-generated CSAM and unwanted sexual imagery, the BBC reported.
- January 9, 2026: X restricts the Grok image-editing feature to paying users, but doesn’t restrict the feature’s ability to undress people on command. So X users can continue to create nonconsensual AI pornography and sexual imagery of random people — as long as they pay up. (Meanwhile, as Wired reported, the web version of Grok continues to generate extremely graphic nonconsensual sexual and pornographic imagery — including of apparent children — for free.)
- January 7, 2026: A dark new subtrend emerges as X users start using Grok to create sexualized deepfakes of women wearing swastika bikinis, Decoherence Media reported. In many images, the women are also depicted doing Roman salutes. One woman targeted was a Jewish Holocaust survivor.
- January 7, 2026: A 24-hour analysis conducted by the researcher Genevieve Oh found that from January 5 to January 6, 2026, Grok generated about 6,700 sexually suggestive or nudifying deepfakes per hour, Bloomberg and Financial Post reported.
- January 5, 2026: An online creator who was targeted by nonconsensual sexual deepfakes told The Cut that being targeted by Grok-generated harassment was “scary,” and that “it was uncomfortable to have that power asserted over you.” She added that it felt like a “digital version” of a “sexual assault.”
- January 5, 2026: Conservative social media commentator Ashley St. Clair, a mother of one of Musk’s many children, told outlets including The Guardian and NBC News that she’s been aggressively targeted by nonconsensual sexual deepfakes. One photo taken when she was 14, she said, was edited to depict her undressed in a bikini.
- January 5, 2026: A spokesperson for the European Commission, the European Union’s executive body, said during a press conference that the organization is “very seriously looking into this matter,” calling the content “illegal” and “disgusting,” CNBC reported.
- January 5, 2026: The independent British media regulator Ofcom said that it was “aware of serious concerns raised about a feature on Grok on X that produces undressed images of people and sexualized images of children” and had “made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK.”
- January 3, 2026: Musk changed his tune, saying that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” He didn’t elaborate on whether X or xAI will take action against bad actors, or if it’s instead up to victims to figure out if an oft-anonymous creep somewhere on the web used Grok to make deepfakes of them. (Deepfakes have historically been difficult for individual victims to counter legally.)
- January 3, 2026: The Malaysian Communications and Multimedia Commission declared that it would investigate X over the content, Rest of World reported.
- January 2, 2026: French prosecutors vowed to investigate the flood of Grok-generated explicit deepfakes on X, Politico reported.
- January 2, 2026: India’s IT ministry demanded that X take action against the proliferation of “obscene” content on X, TechCrunch reported. The country’s order reportedly gave the platform 72 hours to provide a report describing how it had countered the generation of content that is “obscene, pornographic, vulgar, indecent, sexually explicit, pedophilic, or otherwise prohibited under law.”
- January 2, 2026: Musk weighed in on the issue for the first time — with a laughing emoji. X users, meanwhile, continued to use Grok to generate child sexual abuse material (CSAM), unwanted nude images, and images depicting real women being sexually abused, humiliated, and killed.
- December 28–31, 2025: This is when the trend of X users asking Grok to undress women and girls, often by first asking the AI to put a woman or girl in “a tiny bikini,” really started to take off. Many of the incidents occurred in long threads that got lewder and more explicitly pornographic as they went on.
- December 24, 2025: Musk announced that X had rolled out a new feature allowing users to use Grok to edit images and videos. The update allowed for X users to alter images and videos without the permission or knowledge of the original poster.
- December 20–22, 2025: According to a Garbage Day analysis, this is when a growing number of users started finding success “generating scantily clad images using Grok, then immediately demanding it make the clothes transparent.”
A normal company, upon realizing that its platform-embedded AI chatbot was being used at scale to generate CSAM and unwanted deepfake porn of real people and spew it into the open web, would likely move quickly to disconnect the chatbot from its platform until a problem of such scale and severity could be resolved. But these days, X is not a normal company, and Grok is the same chatbot infamous for scandals including — but not limited to — calling itself “MechaHitler” and spouting antisemitic bile.
The story here isn’t just that Grok was doing this in the first place. It’s also that X, as a platform, appears to be a safe haven for the mass generation of CSAM and nonconsensual sexual imagery of real women — content that the losers creating it have largely treated as one big meme. We’ll continue to follow whether X makes meaningful changes — or if it continues to choose inaction.
More on Musk’s reaction to Grok Deepfakes: Elon Musk After His Grok AI Did Disgusting Things to Literal Children: “Way Funnier”