Earlier this week, a troubling trend emerged on X-formerly-Twitter as people started asking Elon Musk’s chatbot Grok to unclothe images of real people. This resulted in a wave of nonconsensual pornographic images flooding the largely unmoderated social media site, with some of the sexualized images even depicting minors.
In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg.
When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed.
Much of this material was directed at online models and sex workers, who already face a disproportionately high risk of violence and homicide.
One of the disturbing Grok-generated images we reviewed depicted a widely followed model restrained in the trunk of a vehicle, sitting on a blue tarp next to a shovel — insinuating that she was on her way to being murdered.
Other AI images involved people specifically asking Grok to put women in scenarios where they were obviously being assaulted, which was made clear by users requesting that the chatbot make the women “look scared.” Some users asked for humiliating phrases to be written on women’s bodies, while others asked Grok to give women visible injuries like black eyes and bruises. Many Grok-generated images involved women being put into restraints against their will. At least one user asked Grok to create incestuous pornography, a request with which the chatbot readily complied.
That a social media-infused chatbot could so readily transform into a nonconsensual porn machine to create unwanted and even violent images of real women at scale is, on its face, deeply unsettling. Even worse was that the creators of these images often seemed to be treating the action like a game or meme, with an air of laughter and detachment.
That nonchalance may speak to a normalization of this kind of nonconsensual content, which had previously been relegated largely to darker corners of the internet. Women and girls, meanwhile, continue to face the real-world harm wrought by nonconsensual deepfakes, which are easier than ever to generate thanks to AI-powered “nudify” tools — and, apparently, multibillion-dollar chatbots.
We’ve reached out to xAI for comment, but haven’t received a reply.
But yesterday, Musk, who owns both X and xAI, took to the social media platform to ask netizens to “please help us make Grok as perfect as possible.”
“Your support,” he added, “is much appreciated.”
More on Grok and safety: Elon Musk’s Grok Is Providing Extremely Detailed and Creepy Instructions for Stalking