Facebook's new AI technology will help moderators find content with potential suicidal undertones. Stephen Lam/Getty Images

One of the unforeseen consequences of Facebook letting any user livestream straight from their phone at any time is that people can use it to broadcast horrible, tragic things before moderators can catch them. Such was the case last month when a man in Turkey (massive trigger warning on this story) broadcast his suicide on Facebook Live. Now, Facebook is using AI to help take preventative measures when someone appears to be at risk of suicide.

As reported by TechCrunch, Facebook is rolling out new systems that will comb through posts in multiple languages in the hopes of detecting troubling language that hints at suicidal intent. Aside from monitoring the posts themselves, the tech will treat comments from other users like “Are you ok?” as a sign that something might be wrong. The idea is that when such a post is detected, a range of resources will be provided to help, like first responder info and even dedicated suicide prevention moderators.
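Facebook hasn't published how its model actually works, but conceptually the detection step amounts to scoring a post's own text together with the worried replies it draws. Here's a deliberately toy Python sketch of that idea; the phrase lists, weights, and threshold are all invented for illustration, and the real system would use trained classifiers rather than keyword matching:

```python
# Illustrative only: Facebook's real system uses trained ML classifiers,
# not keyword lists. Every phrase and weight below is made up.
CONCERNING_PHRASES = ["can't go on", "no reason to live", "goodbye everyone"]
WORRIED_COMMENT_PHRASES = ["are you ok", "please call me", "thinking of you"]
REVIEW_THRESHOLD = 2.0  # hypothetical cutoff for escalating to moderators

def risk_score(post_text: str, comments: list[str]) -> float:
    """Combine signals from the post itself and from worried replies."""
    text = post_text.lower()
    score = sum(1.0 for phrase in CONCERNING_PHRASES if phrase in text)
    # Comments like "Are you ok?" count as a secondary, weaker signal.
    for comment in comments:
        c = comment.lower()
        score += 0.5 * sum(1 for p in WORRIED_COMMENT_PHRASES if p in c)
    return score

post = "I just can't go on anymore. goodbye everyone"
replies = ["Are you OK??", "please call me"]
score = risk_score(post, replies)
print(score, score >= REVIEW_THRESHOLD)  # 3.0 True -> surface to a reviewer
```

The point of the sketch is the shape of the pipeline, not the scoring itself: the post and the audience's reaction to it are both inputs, and anything over a threshold gets routed to a human.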

The tech applies to all types of posts, not just text status updates. That means videos, like the aforementioned Turkish livestream, have their own set of red flags that the AI looks for and that the human moderation team can review before taking action. In effect, the moderation team can pinpoint the moment in a video when the most people reacted with emoji or posted comments, to figure out what alarmed the audience. Additionally, the AI will help prioritize which posts are surfaced to the moderation team first, based on the level of perceived risk.
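Again speaking hypothetically: the "pinpoint the moment" step could be as simple as bucketing reaction timestamps and taking the busiest window, and the triage step could be a priority queue keyed on risk score. A minimal sketch with all data invented:

```python
import heapq
from collections import Counter

def peak_moment(reaction_times: list[float], bucket_seconds: int = 10) -> float:
    """Return the start of the window where the most viewers reacted."""
    buckets = Counter(int(t // bucket_seconds) for t in reaction_times)
    busiest_bucket, _count = buckets.most_common(1)[0]
    return busiest_bucket * bucket_seconds

# Seconds into the stream at which viewers commented or reacted.
reactions = [3.2, 41.0, 42.5, 43.1, 44.9, 45.0, 88.7]
print(peak_moment(reactions))  # 40 -> moderators jump to ~0:40 in the video

# Triage: higher perceived risk is reviewed first. heapq is a min-heap,
# so we push negated scores to pop the riskiest post first.
queue: list[tuple[float, str]] = []
for post_id, score in [("post_a", 1.0), ("post_b", 4.5), ("post_c", 2.0)]:
    heapq.heappush(queue, (-score, post_id))
while queue:
    neg_score, post_id = heapq.heappop(queue)
    print(post_id, -neg_score)  # post_b 4.5, then post_c 2.0, then post_a 1.0
```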

Facebook reiterated that its methods were developed in partnership with Save.org, the National Suicide Prevention Lifeline and other mental health organizations around the world, meaning this tech hopefully was not developed in a Silicon Valley echo chamber. One other thing worth noting: due to local privacy laws, some of this tech will not be used in the European Union.

Of course, any use of artificial intelligence to monitor the lives of humans in order to prevent tragedies is going to set off alarms for people. After the switch is flipped on this tech, how long until it accidentally makes a dire situation worse? Thinking even further into the future, how long until someone considers using it to stamp out political dissent or anything else that could be deemed troubling?

As TechCrunch notes, Facebook chief security officer Alex Stamos responded to these concerns on Twitter.

For now, Facebook is at least saying the right things about this obviously well-intentioned technology.