EU data laws block Facebook's suicide prevention tool

European data laws have prevented Facebook from introducing a tool designed to spot users at risk of suicide.

The social media firm has announced that it will use artificial intelligence to spot posts and video comments that indicate someone is expressing suicidal thoughts.

However, data protection laws across the EU, which ban processing of an individual's sensitive personal data without their explicit permission, mean that the update will not make it to countries in the EU.

Facebook already allows users to report posts when they think the person is at risk, at which point a moderator is alerted and can choose to send the person help such as numbers for helplines or the chance to talk to a friend. However, it says many red flags often go unreported.

Mark Zuckerberg, Facebook's boss, said the company has now started to introduce "proactive detection", which automatically looks for trigger phrases in posts or comments on videos such as "are you OK?" and "can I help?".

After testing the tool in the US, Facebook said it would introduce it worldwide except for in the EU. A spokesman said suicide prevention was a sensitive issue in Europe and that, as a result, the tool would not be introduced there yet.

Sensitive data, such as political opinions, health information and religious beliefs, is subject to strict rules under the EU's Data Protection Directive and the upcoming General Data Protection Regulation.

Ashley Winton, a data protection lawyer at McDermott Will & Emery, said Facebook would have to secure the consent of users in the EU before it could introduce the tool. "Data protection law is tougher in relation to sensitive data," he said.

Facebook has been forced to change its service in the EU before due to scrutiny from regulators over data privacy concerns.

In 2011, Facebook was threatened with legal action in Germany over the facial recognition software used to automatically tag friends in photos, which critics said violated privacy and data protection laws.

The following year it removed the automatic tagging feature so it could remain compliant with European data protection law.

Moments, an app which scans photos taken on users' smartphones to detect friends' faces, is only available as a stripped-back version in Europe and Canada for the same reason. It can be downloaded but does not include the facial recognition technology.

Facebook has been criticised following several high-profile cases in which extreme acts of violence, including shootings, suicide attempts and self-harm, were broadcast on the website.

In January 2016 Nakia Venant, 14, from Miami was found dead in her bedroom after a friend alerted police to a troubling Facebook Live broadcast.

In July 2016 a live, 10-minute video of a man dying after being shot by police in Minnesota was streamed on the site. Two months ago Ayhan Uzun, 54, from Turkey, shot himself on the Live feature after his daughter told him she was engaged.

Facebook said it had been working on suicide prevention tools for 10 years.

It said it uses a pool of "thousands" of volunteers and employees around the world who review reports about content that has been flagged by users.

This includes a dedicated group of specialists who have specific training in suicide and self-harm. In some cases, the proprietary artificial intelligence algorithm has identified videos that may otherwise have gone unreported, it added.

The social network sparked fury in May this year when it refused to delete videos and images of "violent death", abortion and self-harm, saying it did not want to censor its users.