
OpenAI Sued for Mass Shooting: “If ChatGPT Were a Person, It Would Be Facing Murder Charges”

It’s the latest lawsuit over ChatGPT’s alleged role in mass shootings.
By Maggie Harrison Dupré
Image: A stylized portrait of OpenAI CEO Sam Altman. Illustration by Tag Hartman-Simkins / Futurism. Source: Jamie McCarthy / WireImage

The widow of a mass shooting victim is suing OpenAI over ChatGPT’s alleged role in stoking the killer’s deadly rampage, the latest in a string of lawsuits against the Silicon Valley AI firm claiming that its tech has enabled stalking, murder, and mass casualty events.

The lawsuit was filed in Florida on Sunday by Vandana Joshi, whose spouse, Tiru Chabba, was shot and killed by then-20-year-old Florida State University (FSU) student Phoenix Ikner, according to NBC News.

Alarming chat logs first obtained last month by The Florida Observer revealed that Ikner carried on extensive conversations with ChatGPT over the course of months, turning to the chatbot as a confidant with which he discussed a range of revealing and disturbing topics: his loneliness and sexual frustrations; explicit fantasies about a minor; suicidality; fascination with Hitler, Nazis, and racial stereotyping; and his interest in mass killings, including the school shootings at Columbine High School and Virginia Tech.

According to the lawsuit, Ikner also uploaded pictures of firearms he’d obtained to ChatGPT and quizzed the chatbot on how a shooting at FSU might be covered in the media. In response, the chatbot allegedly told Ikner that “if children are involved” in a shooting, “even 2-3 victims can draw more attention,” and provided him with information about ammunition and instructions on how to use different guns. It also advised Ikner on what it said was the best time to commit a school shooting, advice the killer appeared to follow. Chabba and another adult victim were killed, and multiple others were wounded.

“Ikner had extensive conversations with ChatGPT which, cumulatively, would have led any thinking human to conclude he was contemplating an imminent plan to harm others,” reads the lawsuit. “However, ChatGPT either defectively failed to connect the dots or else it was never properly designed to recognize them.”

The lawsuit comes as Florida police are also conducting a criminal investigation into ChatGPT’s alleged role in the FSU killings.

“If ChatGPT were a person,” Attorney General James Uthmeier said last month in a statement announcing the investigation, “it would be facing charges for murder.”

In a statement to NBC, OpenAI said that “last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime.”

“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” the statement continued. “ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”

But as both reporting and Joshi’s lawsuit detail, that characterization is hard to square with the record. Ikner appears to have formed a close, personal relationship with the AI, and his months of sprawling conversations with the chatbot reveal a dark inner portrait of a troubled young man careening toward a devastating day of violence, all while divulging that inner world to a digital companion that, the lawsuit claims, failed to recognize copious warning signs.

Incredibly, this isn’t the only mass shooting in which ChatGPT has allegedly played a consequential role.

OpenAI is also being sued by the families of seven victims of February’s horrific school shooting in Tumbler Ridge, British Columbia, where six young students, all 12 or 13 years old, and a teacher were killed, and dozens of others were wounded. Months before the shooting, as The Wall Street Journal reported, OpenAI’s automated moderation tools had flagged the 18-year-old shooter’s chats for content violations over graphic descriptions of violence. Safety staffers were reportedly so alarmed by the chats that they urged OpenAI leaders to contact local law enforcement; leadership ultimately chose not to.

More on ChatGPT and mass violence: OpenAI Hit With Barrage of Lawsuits Over Failure to Report School Shooter Before Massacre