Stalker's Best Friend

Woman Sues OpenAI, Saying ChatGPT Unleashed a Vicious Stalker Against Her and Did Nothing When She Begged for Help

"His location and plans are something OpenAI could shed light on if they were willing to cooperate."
Maggie Harrison Dupré
Getty / Futurism

A San Francisco woman sued OpenAI last week, alleging that ChatGPT fueled the dangerous delusions of her violent stalker — and that OpenAI failed to intervene even as the woman begged the company for help.

The plaintiff, who filed the case anonymously as “Jane Doe,” claims in the lawsuit that her ex-boyfriend became infatuated with ChatGPT after using the chatbot to talk through their breakup in 2024, according to TechCrunch. The man grew delusional as his use of ChatGPT deepened, and around August 2025, he became convinced that he’d discovered the cure for sleep apnea and that he was being targeted by a high-powered cabal as a result.

As the man’s mental health unraveled, ChatGPT reinforced his delusional and paranoid ideas, allegedly telling him that he was a “level ten in sanity” — and characterizing Doe, with whom he was obsessed, as a manipulator.

The man then launched a terrifying ChatGPT-assisted harassment campaign against Doe, according to her lawsuit. This included generating what the complaint characterizes as “dozens of defamatory quasi-psychological reports” about Doe’s mental health, which he then distributed to her friends, family, and professional associates; copying the woman on “emails that had nothing to do with her,” per the suit, including frantic, disorganized messages he was sending to OpenAI about the hundreds of scientific studies he claimed to be working on; and, eventually, making escalating violent threats against Doe and members of her family. The lawsuit adds that Doe contemplated ending her own life to protect her loved ones.

Doe says she contacted OpenAI with evidence of the abuse she was experiencing in November 2025. The company reportedly replied that the abuse Doe outlined was “extremely serious and troubling,” but failed to follow up after promising to look into it.

Remarkably, though, the lawsuit alleges that, by then, OpenAI’s internal moderation systems had already flagged her stalker’s ChatGPT account for content violations concerning “mass casualty weapons.” His access to his paid ChatGPT Pro account was temporarily suspended, but restored after human review, according to TechCrunch. In January 2026 — months after OpenAI had restored the man’s ChatGPT account, and Doe had submitted a manual notice of abuse — the man was arrested on “four felony counts of communicating bomb threats and assault with a deadly weapon,” according to the lawsuit.

“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit alleges. “The user’s stream of urgent, disorganized, and grandiose claims, along with a concrete ChatGPT-generated report targeting Plaintiff by name and a sprawling body of purported ‘scientific’ materials, was unmistakable evidence of that reality.”

“OpenAI did not intervene, restrict his access, or implement any safeguards,” it adds. “Instead, it enabled him to continue using the account and restored his full Pro access.”

In a request for a temporary restraining order filed alongside the lawsuit, Doe urges the court to compel OpenAI to suspend her ex’s accounts and preserve his transcript records for discovery. This, she says, is urgent: because of an unfortunate procedural screw-up, her stalker — who is unstable to the degree that he was found incompetent to stand trial — was released back to the public, and she now fears for her safety.

“ChatGPT told the user what he wanted to hear: that [Doe] was manipulative and a wrongdoer,” reads the filing, “and that he was a rational, justified actor.”

Doe believes that her stalker’s transcripts will provide her with “basic information she needs to protect herself” against someone who has repeatedly threatened to harm her, per the filing.

The stalker has “threatened our client since being released,” J. Eli Wade-Scott, one of Doe’s lawyers, told the San Francisco Standard. “His location and plans are something OpenAI could shed light on if they were willing to cooperate.”

In a statement, OpenAI told the SF Standard that it had suspended the man’s accounts, but it has yet to acquiesce to Doe’s other requests.

“We are reviewing the plaintiff’s filing to understand the details, and with current information, we’ve identified and suspended relevant user accounts,” OpenAI told the newspaper.

The incident is not an outlier. In December of last year, a 31-year-old Pennsylvania man named Brett Dadig was indicted for stalking at least 11 different women in multiple states as ChatGPT reinforced his violent and misogynistic delusions about women, as detailed by Rolling Stone. And in February, Futurism reported that we’d identified at least ten different cases in which ChatGPT or another chatbot had fed a user’s obsessive — and in some situations dangerous — fixations on another person, resulting in domestic abuse, harassment, and stalking.

More on AI and stalking: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking