Partner In Crime

ChatGPT Encouraged a Violent Stalker, Court Documents Allege

The man "stalked and harassed more than 10 women by weaponizing modern technology," prosecutors said.
Frank Landymore
The man, 31-year-old Brett Michael Dadig, allegedly used ChatGPT as a "therapist" and a "best friend" to vent his misogynistic rants.
Getty / Futurism

A new federal indictment announced by the Department of Justice alleges that ChatGPT encouraged a man accused of harassing over a dozen women in five different states to continue stalking his victims, 404 Media reports, serving as a “best friend” that entertained his frequent misogynistic rants and told him to ignore any criticism he received.

The man, 31-year-old Brett Michael Dadig, was indicted by a federal grand jury on charges of cyberstalking, interstate stalking, and interstate threats, the DOJ announced Tuesday.

“Dadig stalked and harassed more than 10 women by weaponizing modern technology and crossing state lines, and through a relentless course of conduct, he caused his victims to fear for their safety and suffer substantial emotional distress,” said Troy Rivetti, First Assistant United States Attorney for the Western District of Pennsylvania, in a statement.

According to the indictment, Dadig was something of an aspiring influencer: he ran a podcast on Spotify where he constantly raged against women, calling them horrible slurs and sharing jaded views that they were “all the same.” He at times even threatened to kill some of the women he was stalking. And it was on his vitriol-laden show that he would discuss how ChatGPT was helping him with it all.

Dadig described the AI chatbot as his “therapist” and “best friend” — a role, DOJ prosecutors allege, in which the bot “encouraged him to continue his podcast because it was creating ‘haters,’ which meant monetization for Dadig.” Moreover, ChatGPT convinced him that he had fans who were “literally organizing around your name, good or bad, which is the definition of relevance.”

The chatbot, it seemed, was doing its best to reinforce his superiority complex. Allegedly, it said that “God’s plan for him was to build a ‘platform’ and to ‘stand out when most people water themselves down,’ and that the ‘haters’ were sharpening him and ‘building a voice in you that can’t be ignored.'”

Dadig also asked ChatGPT questions about women, such as who his potential future wife would be, what she would be like, and “where the hell is she at?”

ChatGPT had an answer: it suggested that he’d meet his eventual partner at a gym, the indictment said. He also claimed ChatGPT told him “to continue to message women and to go to places where the ‘wife type’ congregates, like athletic communities.”

That’s what Dadig, who called himself “God’s assassin,” ended up doing. In one case, he followed a woman to the Pilates studio where she worked, and when she ignored him because of his aggressive behavior, he sent her unsolicited nudes and repeatedly called her workplace. He continued to stalk and harass her to the point that she moved to a new home and worked fewer hours, prosecutors claim. In another incident, he confronted a woman in a parking lot and followed her to her car, where he groped her and put his hands around her neck.

The allegations come amid mounting reports of a phenomenon some experts are calling “AI psychosis.” Through extensive conversations with a chatbot, some users are suffering alarming mental health spirals, delusions, and breaks with reality as the chatbot’s sycophantic responses continually affirm their beliefs, no matter how harmful or divorced from reality. The consequences can be deadly. One man allegedly murdered his mother after a chatbot helped convince him that she was part of a conspiracy against him. A teenage boy killed himself after discussing several suicide methods with ChatGPT for months, prompting his family to sue OpenAI. OpenAI has acknowledged that its AI models can be dangerously sycophantic, and has admitted that hundreds of thousands of users show signs of AI psychosis in their conversations every week, with millions more confiding in the chatbot about suicidal thoughts.

The indictment also raises major concerns about AI chatbots’ potential as stalking tools. With their power to quickly scour vast amounts of information on the web, the silver-tongued models may not only encourage mentally unwell individuals to track down potential victims, but also automate the detective work needed to do so.

This week, Futurism reported that Elon Musk’s Grok, which is known for having fewer guardrails, would provide accurate information about where non-public figures live — in other words, doxxing them. While the addresses weren’t always correct, Grok frequently provided additional information that wasn’t asked for, like a person’s phone number, email address, and a list of family members along with each of their addresses. Grok’s doxxing capabilities have already claimed at least one high-profile victim: Barstool Sports founder Dave Portnoy. But given chatbots’ popularity and their apparent ability to encourage harmful behavior, it’s sadly only a matter of time before more people find themselves unknowingly in the crosshairs.

More on AI: Alarming Research Finds People Hooked on AI Are Far More Likely to Experience Mental Distress


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.