
Opinion: Facebook's suicide prediction system is dangerous and needs government regulation.

Since 2017, this technology has helped prevent suicides with varying degrees of success, but it is a risky game with people's lives.

Photo: Getty Images

In May 2017, Facebook announced plans to hire three thousand moderators to monitor videos of murder and suicide. Their goal is to prevent acts of violence or suicide broadcast on the social network. As Mark Zuckerberg explained, the Facebook team had previously managed to stop a suicide attempt during a live broadcast, but “in other cases we might not be so lucky.”

Since then, the system has expanded from the United States to other countries, including India. The technology has accumulated a trail of stories, both good and bad, about saving potentially suicidal users. However, The Guardian columnist and healthcare specialist Mason Marks believes that Facebook has taken on too much responsibility, and that this should end in the interest of public health.

How the system works

At the end of December 2018, a patrol officer in an unnamed city in Ohio received a call from Facebook. Moderators had noticed a post by a user of the social network in which she promised to commit suicide at home, and asked the police for help. The officer quickly reached the address, but the woman who opened the door said she had no thoughts of suicide.

The officer feared the woman might be lying and advised her to go to the hospital, either on her own or accompanied by the police. The woman agreed to the second option. According to The New York Times, such stories are not uncommon. Since roughly the end of 2017, police stations from Massachusetts to Mumbai have received notifications from Facebook over concerns that a particular user might harm themselves.

According to the World Health Organization, suicide is the second leading cause of death worldwide for people between 15 and 29 years old. As The New York Times points out, an anti-suicide campaign can also be a way for Facebook to repair its reputation amid numerous scandals.

The system works as follows: the social network's algorithms scan users' posts, comments and videos around the world for text that indicates suicidal intent. When the technology notices something suspicious, it notifies a human moderator, who reviews the post and, if necessary, contacts the police in the relevant country. It is not known whether Facebook moderators have ever contacted Russian police officers: for now, the system operates only in English, Spanish, Portuguese and Arabic.
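
To make the described workflow clearer, below is a minimal conceptual sketch in Python of how such a pipeline could be wired together. It is not Facebook's actual implementation: the keyword-based classifier, the 0-to-1 risk score, the threshold value, and the moderator and police hooks are all assumptions made purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    user_id: str
    text: str
    country: str

def risk_score(post: Post) -> float:
    """Hypothetical classifier returning a suicide-risk score between 0 and 1.
    In reality this would be a trained machine-learning model; here it is a
    simple keyword heuristic used only as a placeholder."""
    alarming_phrases = ("want to die", "end my life", "goodbye forever")
    hits = sum(phrase in post.text.lower() for phrase in alarming_phrases)
    return min(1.0, hits / len(alarming_phrases))

def review_pipeline(post: Post,
                    notify_moderator: Callable[[Post, float], bool],
                    contact_police: Callable[[Post], None],
                    threshold: float = 0.5) -> None:
    """Sketch of the flow described in the article: the algorithm scores the
    post, a human moderator reviews anything above the threshold, and only
    the moderator decides whether to escalate to the local police."""
    score = risk_score(post)
    if score >= threshold:                 # flagged by the algorithm
        if notify_moderator(post, score):  # human review confirms the concern
            contact_police(post)           # escalation to local authorities
```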

According to Facebook, to protect users' personal data, the company does not follow up on the outcome after moderators notify the police; this is where the social network's work and interest end. It is also not entirely clear how the administration decides whether to call the police or to ignore a user's post.

Description of how Facebook's algorithms rate user posts by level of suicide risk

Facebook says it worked with experts in suicide risk prevention to create the system. These experts also advised the social network's moderators, who have experience in law enforcement and crisis response. Users have no way to prevent the algorithms from scanning their posts for suicide risk.

In May 2017, Facebook representatives helped police in the city of Rock Hill, South Carolina, find a man who was streaming his preparations for suicide on Facebook Live. In a conversation with an officer, the moderators described the background of the video, where trees and a road sign were visible, and also provided the coordinates of the man's phone. Thanks to this, the police found the right place in time and took the user to the hospital.

“Two people called the police that night, but they could not tell us where the man was. And Facebook could,” says Rock Hill police officer Courtney Davis.

Law enforcement in the town of Mashpee, Massachusetts, had a different experience with the Facebook system. On August 23, 2017, at around five in the morning, the dispatcher received a message about a man who was streaming his preparations for suicide on Facebook Live. Police arrived at the scene a few minutes later, but by then the man had already died. Only about 40 minutes after the tragedy did a Facebook moderator call the police to report the suicide attempt of the already deceased man.

A similar incident occurred in Macon, Georgia, but that time everything ended well: by the time the social network's representatives contacted the police, patrol officers were already at the home of the person suspected of attempting suicide.

What Mason Marks criticizes

One of the problems the health specialist sees in the Facebook system is the lack of state regulation. The authorities pay no attention to the technology, even though it collects medical data about users, assesses their suicide risk and intervenes in the process of saving people, at the risk of making the situation worse. In other words, Facebook assumes the role of a healthcare provider but answers to no one.

Marks acknowledges that Facebook's intentions are good, but such intervention can lead to unforeseen risks. People whom a moderator suspects of planning a suicide attempt may be forced to go to the hospital or forced to take medication.

Refusing can lead to conflicts with the police, searches of their homes, or even social stigmatization of people whom Facebook rates as being at high risk of suicide.

According to representatives of the social network, the suicide prediction technology is not a health assessment tool; it simply helps those in need. However, Marks points out that the algorithms assign users risk scores ranging from zero to one, where the highest scores reflect the greatest risk of suicide. This practice is similar to an official US government suicide prediction program.

That project ran from 2011 to 2015 and analyzed veterans' social media activity to calculate their risk of suicide. Unlike the Facebook system, however, the government project was governed by laws protecting patients and research subjects. Mark Zuckerberg's company complies with neither, Marks points out.

Current healthcare laws are not enough to protect consumers, because Facebook's efforts are part of a trend in which technology companies take on roles historically reserved for doctors and medical device manufacturers.
Mason Marks
health specialist

The specialist points to similar problems at other companies: Apple's new smartwatches check the heart for signs of arrhythmia, yet the company is not required to comply with the Health Insurance Portability and Accountability Act (HIPAA). The situation is similar with Google's smart home patent, which, in theory, could identify substance use disorders and early signs of Alzheimer's disease from video and audio signals.

Marks is certain that it is wrong to assume these companies are not entering the territory of medical practice. “It's the same as saying that when doctors use an X-ray machine it is medical practice, but when Silicon Valley startups do the same thing it has nothing to do with practicing medicine,” says the specialist.

Marks believes such logic makes no sense, since X-rays (like Facebook's system), regardless of context, are dangerous in the hands of people without proper training and certification.

“In exactly the same way, Facebook's suicide prediction technology, which affects thousands of people, is dangerous in untrained hands, yet regulators allow it to operate without supervision.”

Facebook's supporters may say that the social network is not practicing medicine because it does not make diagnoses. But this argument is false, since suicidal ideation is officially listed in the International Classification of Diseases.

Advocates of the social network may also say that the company's system does not make diagnoses and that the technology's assessments are only statistical conclusions, made without complete certainty. “But diagnoses made by doctors are also rarely certain, and are often expressed as percentages or probabilities,” Marks retorts.

Doctors take a patient's history and examine their condition, drawing conclusions from that information and from their own experience and knowledge. Facebook's moderators act in much the same way; the only difference is how the information is collected, the expert believes. And that is a serious threat if the authorities continue to ignore the IT giant's initiatives.

Facebook and other social networks have been called the new regulators: they influence language, democratic processes and social norms. And all the while, they are quietly becoming new medical regulators as well, creating, testing and deploying health technologies without external oversight or accountability. For everyone's safety, state and federal agencies should pay attention to this.
Mason Marks
health specialist