
Experts question Facebook's suicide prevention measures



But the question remains: should Facebook change the way it monitors users for suicide risk?

"People Need to Know … They May Be Experimented" In 2011, Facebook partnered with the National Suicide Prevention Lifeline to initiate suicide prevention measures, including the ability to report suicide-related content to a friend posted on Facebook. The person who posted the content received an email from Facebook requesting that they contact the National Suicide Prevention Lifeline or chat with a crisis worker.

In 2017, Facebook expanded those suicide prevention efforts to use artificial intelligence to identify posts, videos and Facebook Live streams containing suicidal thoughts or content. That year, the National Suicide Prevention Lifeline said it was proud to partner with Facebook and that the social media company's innovations made it easier for people at risk to access support.
"It's important that community members, whether online or offline, do not feel helpless when dangerous behavior occurs," said John Draper, director of the National Suicide Prevention Lifeline, in a 201
7 press release Facebook's approach is unique, and its tools enable community members to take an active role, report concerns and report concerns as needed. "
In a blog post, Facebook described how its AI looks for patterns in posts or in comments that may contain signs of suicide or self-harm. According to Facebook, comments such as "Are you alright?" and "Can I help?" can be an indicator of suicidal thoughts.
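Facebook has not published the technical details of this system, but the general idea of scanning a post and its comments for concerning phrases can be illustrated with a minimal sketch. The two phrases below come from the article's examples; the function name, the scoring and the review threshold are assumptions made purely for illustration, and Facebook's actual system relies on machine learning rather than simple keyword rules.

```python
import re

# Minimal illustrative sketch only -- not Facebook's actual system.
# The phrases come from the article; names, scoring and threshold are assumptions.
CONCERN_PHRASES = [
    r"\bare you (ok|okay|alright)\b",
    r"\bcan i help\b",
]

def flag_for_review(post_text: str, comments: list[str], threshold: int = 2) -> bool:
    """Return True if a post and its comments contain enough concerning
    phrases to be queued for human review."""
    patterns = [re.compile(p, re.IGNORECASE) for p in CONCERN_PHRASES]
    hits = sum(1 for text in [post_text, *comments] for p in patterns if p.search(text))
    return hits >= threshold

# Example: two worried comments push this post over the review threshold.
print(flag_for_review(
    "I don't know how much longer I can keep going.",
    ["Are you alright?", "Can I help? Please message me."],
))  # prints True
```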

When the AI or another Facebook user flags a post, the company reviews it. If Facebook determines that the post requires immediate intervention, it can work with first responders, such as police departments, to send help.

An opinion paper published Monday in the Annals of Internal Medicine, however, argues that Facebook lacks the transparency and ethical oversight needed to validate how it analyzes users' posts, identifies those who appear to be at risk of suicide and alerts emergency services to that risk.

The article argues that Facebook's suicide prevention efforts should be held to the same standards and ethics as clinical research, for example review by outside experts and informed consent from the people whose data are collected.

Dr. John Torous, director of digital psychiatry at the Beth Israel Deaconess Medical Center's Department of Psychiatry in Boston, and Ian Barnett, assistant professor of biostatistics at the University of Pennsylvania's Perelman School of Medicine, co-authored the new paper.

"There is a need for discussion and transparency on innovation in mental health in general, and I think there is great potential for technologies to improve suicide prevention and improve overall mental health, but people need to Being aware of these things happen and in some ways they could be experimenting, "said Torous.

"We all agree that we want innovation in suicide prevention, we want new ways to reach people and help people, but we want it to be ethical, transparent and collaborative," he said claim that the average Facebook user may not even notice that this is happening. That's why they are not even informed. "

In 2014, Facebook researchers published a study in which they adjusted the amount of negative or positive content shown in users' feeds to see whether it influenced what those users then posted. Users criticized the study, saying they had not been told it was being carried out.

The Facebook researcher who designed that experiment, Adam D.I. Kramer, said in a post that the research was part of an effort to improve the service, not to upset users. Facebook has since made further efforts to improve its service.

Last week, the company announced that it had consulted with experts on protecting users from content related to self-harm and suicide. The announcement followed reports on the death of a girl in the UK whose Instagram account reportedly contained distressing content about suicide. Facebook owns Instagram.

"Experts in suicide prevention say that one of the best ways to prevent suicide is for people in need to hear from friends and family members who care about them. Facebook is in a unique position to help because of the friendships Antigone Davis, Facebook's global security chief, wrote Monday in an e-mail in response to questions about the new Opinion Paper, "We have people on our platform connecting with friends and organizations that can provide support." ,

"Experts agree that helping people as quickly as possible is so important, so we're using technologies to proactively detect content that could put people off suicidal thoughts, and we're determined to make our suicide prevention more transparent make efforts, "she said.

Facebook has also said that using technology to proactively detect content in which someone expresses suicidal thoughts does not amount to collecting health data. The technology does not measure a person's overall suicide risk or anything about a person's mental health, she said.

What Health Experts Expect From Tech Companies

Arthur Caplan, professor and founding director of the Division of Medical Ethics at NYU Langone Health in New York, applauded Facebook for wanting to help with suicide prevention but said the new opinion paper is right that Facebook must take additional steps to address privacy and ethics.

"This is another area where private trading companies operate. We launch programs that aim to do good, but we are not sure how trustworthy they are or how confident they can be or willing to stay Keeping information collected, be it Facebook or someone else, "said Caplan, who was not involved in the paper.

"This leads us to the general question: are we keeping a close eye on big social media?" Even if they try to do something good, that does not mean they understand it correctly, "he said


Some technology companies, including Amazon and Google, probably have or could have access to large amounts of health data, said David Magnus, a professor of medicine and biomedical ethics at Stanford University, who was not involved in the new opinion paper.

"All of these private entities, which are primarily not considered as health care facilities or facilities, may be able to have much health care information, especially with machine learning techniques," he said. "At the same time, they are almost completely outside the existing regulatory system that addresses these types of institutions."

For example, Magnus noted that most tech companies fall outside the scope of the "Common Rule," the Federal Policy for the Protection of Human Subjects, which governs research on humans.

"This information they collect – and most importantly, if they can use machine learning to make predictions about health care and gain insight into these people – all are protected in the clinical area by things like HIPAA for anyone who receives their medical care through a so-called covered entity, "Magnus said.

"But Facebook is not a registered entity and Amazon is not a covered entity – Google is not a covered entity," he said. "Therefore, they do not have to meet the confidentiality requirements that apply to the way we approach health care information."

HIPAA, the Health Insurance Portability and Accountability Act, requires that a person's health information be kept secure and confidential and restricts how that information can be disclosed.

Often, the only privacy protections social media users have are the agreements laid out in the company's policy documents that they sign, or "click to agree" to, when creating their accounts, Magnus said.

"There's really something It's strange to essentially implement a public health screening program through these companies that are outside of those regulatory structures that we've talked about, and because they're outside of them, their research and the algorithms themselves are completely opaque, "he said.

"The problem is that all this is so secret"

Another problem is that Facebook's suicide prevention efforts are not held to the same ethical standards as medical research, according to Dr. Steven Schlozman, co-director of the Clay Center for Young Healthy Minds at Massachusetts General Hospital, who was not involved in the new opinion paper.

"Theoretically, I would love it if we could sort it out." These data are captured by all these systems and used to better care for our patients, which would be great, but I do not want this to be a closed book process. that this is open to external regulators, I would like to give some form of informed consent, "said Schlozman.

"The problem is that all this is so mysterious on Facebook, and Facebook is a multi-million dollar, profit-driven company." The possibility that these data are collected and used for purposes other than the apparent blessing for which it appears is simply hard to ignore, "he said. It really feels like they are transgressing a lot from pre-determined ethical boundaries. "

