Most of us probably know that Facebook keeps track of its users. Its programmers have created algorithms that can guess our preferences in all sorts of areas, even in politics. Most of us probably also know that Facebook has come under scrutiny for its actions (or inaction) during the previous election cycle. Its founder, Mark Zuckerberg, has appeared before Congress to explain his company’s behavior. At first he denied there was any problem; then, as evidence mounted, he began to acknowledge that Facebook should have exercised better oversight.
Recently, both The New York Times and The Washington Post have reported on Facebook’s approach to mental health, specifically as it relates to its users’ potential risk of suicide. Facebook programmers have created algorithms to monitor users for potential suicide risk. In some situations, the company has called authorities to intervene when users’ online postings suggested they were in immediate danger.
Understandably, Facebook has resisted regulation. With a motto like “Move fast and break things,” it is easy to see why the company would not want any regulatory oversight at all. However, mental health is a serious matter, and the suicide rate in the United States is alarmingly high. This seems qualitatively different from Facebook knowing what kind of vacations I prefer or what kind of automobile I drive.
In his Washington Post op-ed, attorney Mason Marks writes: “Facebook is losing the trust of consumers and governments around the world, and if it mismanages suicide predictions, that trend could spiral out of control. Perhaps its predictions are accurate and effective. In that case, it has no reason to hide the algorithms from the medical community, which is also working hard to accurately predict suicide. Yes, the companies have a financial interest in protecting their intellectual property. But in a case as sensitive as suicide prediction, protecting your IP should not outweigh the public good that could be gained through transparency.”
There are several ethical issues at play here. Do Facebook users have any idea what the company is doing with their information? Can a non-medical company engage in the “practice of medicine” (Marks’ term) without any meaningful regulation? What should Facebook be allowed to do with the very personal mental health information it gathers from its users?
Every generation wrestles with doing ethics in light of rapidly developing technology. In 2019, that conversation continues at an even quicker pace.