Mea Culpa

Sam Altman Issues Grim Apology

"I am deeply sorry that we did not alert law enforcement to the account that was banned in June."
By Victor Tangermann
OpenAI CEO Sam Altman has issued a grim apology, admitting that the firm had failed to notify law enforcement ahead of a deadly shooting.

In February, an 18-year-old named Jesse Van Rootselaar killed eight people and then herself, wounding dozens more, in a rampage that started at her home and continued at a high school in Tumbler Ridge, British Columbia.

Investigators later learned that Van Rootselaar’s ChatGPT account had been flagged and banned by OpenAI’s staff for describing “scenarios involving gun violence” — many months before the massacre took place.

Yet OpenAI failed to notify law enforcement, raising thorny ethical questions about the pervasive role the tech plays in modern society and how it's facilitating deeply troubling behavior, from stalking to violence.

Now, OpenAI CEO Sam Altman has issued a grim apology, admitting that the firm has fallen short of its responsibility.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” he wrote in an open letter, dated April 23, and addressed to the Tumbler Ridge community. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

“I want to express my deepest condolences to the entire community,” he wrote. “No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child.”

BC Premier David Eby was left unimpressed by Altman's mea culpa.

“The apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge,” he replied in a tweet.

In the aftermath of the event, OpenAI vowed to make changes.

“Mental health and behavioural experts now help us assess difficult cases, and we have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence,” OpenAI head of global policy Ann O’Leary wrote in a letter at the time.

“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” she added.

In his latest apology, Altman similarly promised to “find ways to prevent tragedies like this in the future” by working with “all levels of government.”

The Tumbler Ridge shooting wasn't the only recent bloodbath to involve ChatGPT. Roughly ten months earlier, Florida State University student Phoenix Ikner killed two people and injured seven others on the university's campus.

As recently publicized transcripts show, Ikner had extensive and deeply troubling conversations with the chatbot, including detailed discussions and plans for the shooting.

More on the shootings: The Florida Mass Shooter’s Conversations With ChatGPT Are Worse Than You Could Possibly Imagine

I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.