Facebook on Monday said it is stepping up the use of artificial intelligence to identify members of the leading social network who may be thinking of suicide.
Software will look for clues in posts or even in videos being streamed on Facebook Live, then fire off reports to human reviewers and speed up alerts to responders trained to help, according to the social network.
“This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide,” Facebook vice president of product management Guy Rosen said in a blog post.
Signs watched for are said to include text in people's posts, or in comments responding to them, such as a friend asking whether someone is troubled.
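Facebook has not published details of its model, which Rosen describes only as "pattern recognition technology." The signal-matching step described above can nonetheless be illustrated with a deliberately simplified sketch: a keyword-based scorer that flags posts for human review. Everything here, including the phrase lists, function name, and routing logic, is hypothetical and far simpler than a production machine-learning system.

```python
# Illustrative sketch only: Facebook's actual system uses trained ML
# models. This toy version flags a post for human review if either the
# post itself or a comment replying to it matches a concerning phrase.

# Hypothetical phrases of concern in a user's own post.
POST_PHRASES = ("want to end it", "can't go on", "no reason to live")
# Hypothetical phrases in comments from friends, e.g. checking in.
COMMENT_PHRASES = ("are you ok", "are you okay", "is everything alright")

def flag_for_review(post_text: str, comments: list[str]) -> bool:
    """Return True if the post should be routed to a human reviewer."""
    text = post_text.lower()
    if any(phrase in text for phrase in POST_PHRASES):
        return True
    # Comments replying to the post can also be a signal.
    return any(
        phrase in comment.lower()
        for comment in comments
        for phrase in COMMENT_PHRASES
    )

print(flag_for_review("I feel like I can't go on", []))           # True
print(flag_for_review("Great day at the park", ["Are you ok?"]))  # True
```

A real system would replace the phrase lists with a classifier trained on labeled examples, but the overall flow, scoring content and escalating matches to trained reviewers, is the same shape the article describes.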
Facebook already has tools in place for people to report concerns about friends who may be considering self-harm, but the software can speed the process and even detect signs people may overlook.
“There have been terribly tragic events — like suicides, some live-streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” Facebook chief executive Mark Zuckerberg said early this year in a post at the social network focused on building global community.
“Artificial intelligence can help provide a better approach.”
Facebook is rolling out the artificial intelligence tool outside the US and plans to eventually make it available everywhere except the European Union, where data usage is restricted by privacy regulations.
Facebook has been collaborating with mental health organizations for about a decade on ways to spot signs users may be suicidal and get them help.