AI Ethics
Suicide is the 10th leading cause of death in the United States, and in Canada it is the second leading cause of death among people aged 10 to 19. Globally, roughly 800,000 people die by suicide every year. Unfortunately, only 60 countries have up-to-date, quality data on suicide, and only 28 report having a national strategy for handling and preventing suicide.
Canada has recently moved to confront the problem: the government has hired an Ottawa-based company that specializes in both social media research and artificial intelligence (AI) to identify online trends and find patterns in suicide-related behavior.
The major goal of this project is to "define 'suicide-related behavior' on social media and use that classifier to conduct market research on the general population of Canada," according to a document published to the Public Works website.
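Neither the published document nor the company specifies how such a classifier would be built. As a rough, generic illustration of what classifying public posts as suicide-related might involve, here is a minimal sketch using TF-IDF features and logistic regression; the library choice, training posts, and labels are all hypothetical placeholders, not drawn from the project.

```python
# Minimal sketch of a text classifier for flagging suicide-related posts.
# This is NOT Advanced Symbolics' method; it is a generic illustration,
# and the example posts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = suicide-related, 0 = not.
train_posts = [
    "I can't see a way out anymore",
    "Great game last night, what a comeback",
    "Nobody would miss me if I were gone",
    "Trying a new pasta recipe this weekend",
]
train_labels = [1, 0, 1, 0]

# TF-IDF turns each post into a weighted word-frequency vector;
# logistic regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Score new public posts: output is the probability of the "related" class.
new_posts = ["I'm done with everything", "Done with my homework, finally"]
print(model.predict_proba(new_posts)[:, 1])
```

In a trend-monitoring setup like the one described, scores such as these would be aggregated over regions and time windows rather than used to flag individual users.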
For the time being, the project is only a pilot. It will run for three months, at which point the Canadian government will, according to its published document, "determine if future work would be useful for ongoing suicide surveillance."
The Public Health Agency of Canada (PHAC) said of the pilot program: "to help prevent suicide, develop effective prevention programs, and recognize ways to intervene earlier, we must first understand the various patterns and characteristics of suicide-related behaviors. PHAC is exploring ways to pilot a new approach to assist in identifying patterns, based on online data, associated with users who discuss suicide-related behaviors."
Preventing Suicide
The company, Advanced Symbolics Inc., believes its approach, which uses AI and market research to find trends, is more accurate and capable than other systems. The company's CEO, Erin Kelly, even stated that "we're the only research firm in the world that was able to accurately predict Brexit, the Hillary and Trump election, and the Canadian election of 2015."
While this seems like an advanced and positive effort toward reducing suicide rates, some ethical concerns have arisen. At first, some worried that the system would target individuals it deemed suicidal or at-risk, which could be considered a privacy violation; the company then explained that it actually locates trends and does not seek out individuals.
This effort is similar to a 2017 effort by Facebook to use AI to monitor posts for signs of suicidal intent. When such a post was detected, the system would send messages to the user, and perhaps their friends. Unfortunately, that system seemed to infringe heavily on an individual's personal, online space.
While Advanced Symbolics's system would only monitor public posts, looking for trends, it would certainly change the landscape of social media. Overall, this system could have an enormously positive impact on suicide rates and on communities' ability to predict and respond to at-risk groups and circumstances. But if the approach becomes more widely adopted, will social media users become less open about their lives in public posts? Will there be a point at which the system is no longer effective?
It's difficult to determine, but for the time being it's reassuring to know that the system does not seem to infringe on personal privacy, focusing on trends instead. As Kenton White, chief scientist with Advanced Symbolics, put it, "It'd be a bit freaky if we built something that monitors what everyone is saying and then the government contacts you and said, 'Hi, our computer AI has said we think you're likely to kill yourself'."
The company's goal is instead to identify areas where the potential for multiple suicides is high. Advanced Symbolics believes its AI could provide a warning two to three months before a suicide spike happens. The government could then react accordingly, providing resources and healthcare.
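The company has not described how such warnings would be computed. One simple way to flag an emerging spike in aggregated, anonymized counts is a z-score test against a historical baseline, sketched below; the function, threshold, and weekly counts are all illustrative assumptions, not the company's method.

```python
# Sketch of aggregate trend monitoring: count classifier-flagged posts per
# region per week and raise an early warning when recent counts rise well
# above the historical baseline. Data and threshold are hypothetical.
from statistics import mean, stdev

def spike_warning(weekly_counts, recent_weeks=4, z_threshold=2.0):
    """Return True if the average of the most recent weeks exceeds the
    historical baseline by more than z_threshold standard deviations."""
    baseline = weekly_counts[:-recent_weeks]
    recent = weekly_counts[-recent_weeks:]
    mu, sigma = mean(baseline), stdev(baseline)
    return (mean(recent) - mu) / sigma > z_threshold

# Hypothetical weekly counts of flagged public posts for one region.
counts = [12, 14, 11, 13, 12, 15, 13, 12, 22, 25, 27, 30]
if spike_warning(counts):
    print("Elevated suicide-related activity: allocate outreach resources.")
```

Because the test operates only on per-region counts, no individual user ever needs to be identified, which matches the trends-not-individuals framing the company describes.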