A list of resources is available from the Suicide Prevention Resource Center.
There exists in the internet ether an extremely disturbing site where users goad each other into taking their own lives, and Google says it won't remove the site from its search results.
A stomach-turning deep dive in The New York Times explores the site, which we won't be naming here. The story raises difficult questions about both ethics and censorship — and especially about Google, which has chosen to passively condemn the site while allowing it to remain a prominent hit in its search results.
Equal parts message board and macabre instructional manual, the site considers itself "pro-choice": in favor of people having the choice to die by suicide, with access to information about how to do it and a community that will support them without judging them or trying to keep them alive.
In the two years the site has been up, at least 45 users have died by suicide, per the Times' count, and likely many more. Many learned methods from the site, got "support" from fellow users when their resolve to end their lives wavered, and some even live-blogged their deaths.
Run by two late-twentysomething men who live thousands of miles apart, in Alabama and Uruguay, the site sprang up after Reddit closed a forum with the same mission. The two operators were previously known only by aliases, but were unmasked by the Times.
It's worth noting that there is a raging debate about assisted suicide, in which the terminally ill can access treatments to end their lives. That conversation is as heated as it is fraught, from the Kevorkian machines of the 1990s to the states and countries moving to legalize physician-assisted suicide.
The site highlighted by the Times' investigation, however, should not be part of that debate. Its targets are mostly healthy people for whom the decision to end their lives is almost certainly a gross miscalculation, a distinction the Times underscored by excluding physician-assisted deaths from its charts mapping the sharp increase in suicides over the last two decades.
Regardless of one's personal beliefs about euthanasia and suicide (hell, regardless of one's beliefs about censorship), the concept of arming a group of unwell people with specific information and support for ending their own lives is disturbing.
In many ways, this cursed suicide site represents the latest in a long string of problematic online material that provides people with information about all kinds of horrible things, from pro-anorexia blogs and ineffective COVID-19 treatments to forums for white nationalists and the "involuntary celibates" known as incels.
All of those topics have prompted calls for tech companies like Google and Facebook to censor particularly horrible online content.
Often, they comply. Facebook, for example, is quick to try (and repeatedly fail) to filter out harmful material.
The situation at Google and its parent company Alphabet is more complex. Though Google has removed medical misinformation and white supremacist content hosted on YouTube, it takes a more hands-off approach to content that it merely indexes on the open web, as this week's controversy shows.
"This is a deeply painful and challenging issue, and we continue to focus on how our products can help people in vulnerable situations," a Google spokesperson told Futurism. "If people come to Google to search for information about suicide, they see features promoting prevention hotlines that can provide critical help and support."
"We have specialized ranking systems designed to prioritize the highest quality results available for self-harm queries, and we also block Autocomplete predictions for those searches," she continued. "We balance these safeguards with our commitment to give people open access to information. We’re guided by local law when it comes to the important and complex questions of what information people should be able to find online."
That's a fairly absolutist position. The First Amendment may protect the speech of neo-Nazis and suicide advocates, but tech companies aren't governments; they can, in principle, take down whatever they want.
Google's slogan used to be "Don't be evil," a phrase the company removed from its code of conduct in 2018. And indeed, it feels as though there is a little room for evil in the company's search results.
Updated to clarify that while Google has removed content hosted on YouTube, it does not have a history of deindexing controversial content on its search engine.
READ MORE: Where the Despairing Log On, and Learn Ways to Die [The New York Times]
More on assisted suicide, which this site is decidedly not about: Assisted-Suicide Chamber Approved by Authorities in Switzerland