Project Nightingale

Technology always seems to outrun ethics.  But just because something can be done does not mean that it should be done.  Usually this discussion focuses on the latest life-sustaining medical device, but with the emergence of electronic medical records, a whole new set of problems has appeared.

Last month, Rob Copeland of The Wall Street Journal published a report (behind a paywall) on the partnership between Google and Ascension Health, subtitled:  “Search giant is amassing health records from Ascension facilities in 21 states; patients not yet informed.”  Google has named its efforts “Project Nightingale.”

The idea of medical records being available in the cloud (or somewhere electronically) sounds very appealing at first.  However, it is not difficult to imagine “Project Nightingale” turning into “Project Nightmare” when it comes to patients’ privacy.  This is what drove a Google whistleblower to come forward: “why was the information being handed over in a form that had not been ‘de-identified’ – the term the industry uses for removing all personal details so that a patient’s medical record could not be directly linked back to them?  And why had no patients and doctors been told what was happening?”

Dr. David Feinberg, the head of Google Health, addresses the concerns that have been raised: “Google has spent two decades on similar problems for consumers, building products such as Search, Translate and Gmail, and we believe we can adapt our technology to help. That’s why we’re building an intelligent suite of tools to help doctors, nurses, and other providers take better care of patients, leveraging our expertise in organizing information.”  (I’m not sure using Google Translate as a positive example will bring comfort to many readers.) Feinberg also discusses the precautions that Google has put into place.

Since the original WSJ report broke last month, Congress has gotten involved.  Consumer Affairs reports “that the U.S. Department of Health and Human Services has opened an inquiry into the project to determine whether it violates the Health Insurance Portability and Accountability Act of 1996 (known as HIPAA).”

Maintaining patients’ privacy is an important issue and must not be glibly overlooked.  It will no longer suffice simply to say that we trust Google (or Facebook, etc.) to do the right thing.

Big brother is a health care provider

Big brother may be watching you — no, not the one you’re thinking of, but one a lot closer to you: your local hospital or health care system.

Some hospital systems are now obtaining credit card transaction data about local populations, plugging the data into algorithms that identify those people most likely to get sick, then trying to intervene to prevent them from getting sick or having to go to an emergency department.

Some examples of data they might find useful: whether you have a gym membership, whether you shop at plus-sized stores, whether you buy cigarettes, whether you own a car, how many people live in your home, your food shopping and dining-out habits, your annual income.

Of course, marketers have been using such data for years to try to sell us things. The hospital systems’ use of data is, at least ostensibly, for a higher purpose: as we move toward a medical system that pays providers for meeting quality benchmarks and penalizes them for missing them, it is not only in patients’ interest but in health care providers’ financial interest to intervene early and try to prevent people from having expensive complications. But something feels not-quite-right about hospitals using the same tactics Wal-Mart or Amazon uses.

I suppose this development is reflective of many current trends: the disappearance of privacy in our connected world, the medicalization of all of life, the development of the “Brave New Biocracy” of which Ivan Illich wrote, the shift in emphasis from individual to population health, the computer generation’s fascination with data (to a person with a hammer, everything looks like a nail; to a person with a computer, everyone looks like data), the skyrocketing costs of techno-medicine, and our elevation of health care to the highest good, before which all other considerations fade.

Our conviction that better health care is an absolute good means that when it comes to anything that looks like it could make health care better — whether it be inserting computer screens between doctors and patients, cloning humans and then destroying them to benefit other humans, or prying Big Brother-like into the intimate details of our lives to find ways we are not conforming to the latest dogma about what’s good for us — we ask, Can we? but never get around to the more important questions: Should we? And if we do, where might this lead?