The Hubris of Head Transplantation

As a rehabilitation physician with an interest in acute spinal cord injury, I try to keep abreast of neuroscience research, both in animals and in humans, that might suggest a breakthrough in spinal cord injury recovery. Sadly, despite increased public awareness driven by high-profile individuals who suffered this devastating injury (notably Christopher Reeve and his foundation), ongoing research into chemical, cellular transplant (including some stem-cell) and electrical stimulation therapies, and advances in the emergency medical and surgical management of acute spinal injury, medical science has not seen dramatic improvement in functional recovery from spinal cord injury since I began my practice almost 30 years ago. I spend some time reviewing the results of my patients’ Internet research into “claims of cures” as they desperately look for any solution to their disability. I thought I had seen everything until I was given a link to a 2015 TEDx talk by Italian neurosurgeon Dr. Sergio Canavero and saw a recent USA Today article regarding his plans for an imminent head transplant, scheduled to occur in China sometime later this month or early next year. In fairness, 99.999% of the scientific news coverage condemns the planned surgery (including the TED community), and the popular news coverage consistently leads with a picture of Gene Wilder in his role in the Mel Brooks movie Young Frankenstein (as in this link).

In short, Dr. Canavero is planning to remove the head of a patient who has a severe progressive musculoskeletal disease and transplant it onto the otherwise healthy body of a brain-dead individual, who will act as the donor. Canavero claims that, unlike the random high-energy trauma of an accident, which destroys a significant section of the spinal cord, his technique uses a precision cutting instrument that minimizes cord trauma. He combines this with profound hypothermia, cooling the head to 12 degrees Celsius during the transplant, uses a substance commonly employed as a laxative, polyethylene glycol (PEG), to reconnect the spinal cord on the donor body, and follows with proprietary electrical stimulation of the donor body’s spinal cord to maximize recovery. Sounds pretty easy, right?

Ignoring the ethical issues for a moment, the overarching technical problem is that the head transplant technique has yet to work in any animal. Subsections of the technique have shown limited benefit, such as using PEG to encourage spliced segments of spinal cord to heal. But a success (partial at best) in one small area never guarantees success in a broader application. Condemning the whole endeavor on ethical grounds is therefore almost beside the point: recommending a surgical procedure that has never been successful should be ethically abhorrent regardless of anyone’s worldview.

One final comment may be worth considering. If you watch Dr. Canavero’s TEDx YouTube talk to the bitter end, he gives you a hint at what motivates his work. He makes the case for eventually perfecting his technique so that the human brain can become immortal. Transplanting a head onto a younger body (and repeating the process) effectively allows a head to live forever; he even suggests connecting a head to a machine to achieve the same result. He is really talking about immortality of human consciousness, and he actually refers to the brain as a filter for consciousness. It seems the pinnacle of human hubris to believe that we can achieve immortality for ourselves, unless, of course, it is already available to us.

I suggest that John 6:47-51 offers a better way.

Uterine Transplantation – for Men?

Susan Haack began exploring the topic of uterine transplantation in women on this blog back in February 2014. In just under four short years, the technology has not only resulted in live births in several women who received uterine transplants, but the outgoing president of the American Society for Reproductive Medicine, Dr. Richard Paulson, is suggesting we consider exploring the technique in men. While there are certainly hurdles to overcome (the need for cesarean section for the actual birth, hormone supplementation, the complicated nature of the transplant even for cisgender women), Dr. Paulson does not consider these barriers insurmountable for transgender women.

Dr. Julian Savulescu, professor of practical ethics at Oxford, has cautioned that initiating a pregnancy in a transgender woman may be unethical if it poses significant risk to the fetus. The above-linked article misquotes his concern as being over “any psychological harm to the child born in this atypical way”. The following is his actual quote from his own blog:

Therefore, although technically possible to perform the procedure, you would need to be very confident the uterus would function normally during pregnancy. The first US transplant had to be removed because of infection. There are concerns about insufficient blood flow in pregnancy and pre-eclampsia. A lot of research would need to be done not just on the transplant procedure but on the effect in pregnancy in non-human animals before it was trialled in humans. Immunosuppressives would be necessary which are risky. A surrogate uterus would be preferable from the future child’s perspective to a transplanted uterus. Uterine transplantation represents a real risk to the fetus, and therefore the future child. We ought to (other things being equal) avoid exposing future children to unnecessary significant risks of harm.

One putative benefit might be the psychological benefit to the future mother of carrying her own pregnancy. This would have to be weighed against any harm to the child of being born in this atypical way.

His concerns are the baseline medical risks of using a transplanted uterus to conceive a child, regardless of the sex of the recipient. None of his concerns relate to psychological harm to the child caused by uterine transplantation in a transgender woman as opposed to a cisgender woman. Savulescu is explicit at the beginning of his blog that “[t]he ethical issues of performing a womb transplant for a [sic] transgender women are substantially the same as the issues facing ciswomen.” Is the only additional risk to the child “born this atypical way” the need for hormone supplementation in the transgender woman compared to the cisgender woman? Can we really know, a priori, what all of the attendant risks to the child are with uterine transplantation in a transgender woman?

Regardless, let’s assume Savulescu is correct that there is indeed no ethical difference between a cisgender woman and a transgender woman carrying a child to term via uterine transplantation. Then there can certainly be no ethical difference between a transgender woman and a cisgender man doing the same. If the foregoing is true, can there be any ethical barrier preventing a man from using his sperm to fertilize a donor egg and carrying his own baby to term via uterine transplantation? After all, per Savulescu, all we need be concerned about from a bioethical standpoint are the technical issues and risks of uterine transplantation, regardless of the recipient’s biological sex or self-identified gender.

In Genesis, God created two complementary sexes and stated this difference was good. We are moving toward eliminating differences between the sexes and arguing that this is good. We cannot both be correct.

I wonder whether Dr. Haack thought we would get this far down this particular bioethical slippery slope in four short years.

Is Your Polygenic Risk Score a Good Thing?

Back in October, Jon Holmlund wrote a blog entry regarding the popular company 23andMe and its collection of your health-related information along with your genetic material. I missed the significance of that relationship at the time. It took a recent article in Technology Review by my favorite technology writer, Antonio Regalado, to raise my ethical antennae. In his article, he explains the nexus of big-data mining of genetic data and health information (such as is collected by 23andMe) and its potential future use in selecting embryos for IVF, selecting not only against polygenic diseases such as type 1 diabetes but potentially for non-disease traits such as height, weight or even IQ.

Yikes.

Pre-implantation genetic diagnosis (PGD) is already used to select embryos for IVF implantation that are free of genetic patterns such as cystic fibrosis or Down syndrome. Diseases that result from multiple genes (polygenic disorders) still defy current PGD methods. Using big-data analysis of health information compared against linked genetic data, scientists are getting better at constructing accurate polygenic risk scores: statistical models that may more accurately ‘guess’ an embryo’s future risk not only for juvenile diabetes but also for later-in-life diseases (such as heart disease, ALS or glaucoma) or other, less threatening inheritable traits (such as eye color, height or IQ) that result from multiple genes (and perhaps even environmental factors). There is confidence (hubris?) that with enough data and enough computing power, we can indeed accurately predict an embryo’s future health status and all of his or her inheritable traits. Combine that further with all of the marketing data available from Madison Avenue, and we can predict what type and color of car that embryo will buy when he or she is 35.

Ok, maybe not the color…
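Behind the joking, the statistical core is simple enough to sketch. A polygenic risk score is essentially a weighted sum: each risk variant a genome carries contributes its estimated effect size. The following minimal Python sketch shows that calculation; the variant names and weights are invented for illustration and are not drawn from any real model.

```python
# Minimal sketch of a polygenic risk score (PRS) calculation.
# The variants and effect weights below are hypothetical, for illustration only;
# real scores draw on thousands to millions of variants from published
# genome-wide association study (GWAS) summary data.

# Estimated effect sizes (e.g., log odds ratios) keyed by variant id.
effect_weights = {
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}

# Genotype = number of copies of the risk allele (0, 1, or 2) per variant.
genotype = {
    "rs0000001": 2,
    "rs0000002": 0,
    "rs0000003": 1,
}

def polygenic_risk_score(weights, genotype):
    """Weighted sum of risk-allele counts: the core of every PRS."""
    return sum(weights[snp] * genotype.get(snp, 0) for snp in weights)

print(f"Polygenic risk score: {polygenic_risk_score(effect_weights, genotype):.2f}")
```

The hard part, of course, is not the sum but the weights: accurately estimating millions of small effects is precisely where the big data comes in.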

Seriously, companies such as Genomic Prediction would like to see IVF clinics eventually use their expanded statistical models to assist in PGD, using a proprietary technique they are calling Expanded Pre-implantation Genomic Testing (EPGT). Consider the following two quotes from Regalado’s article:

I remind my partners, “You know, if my parents had this test, I wouldn’t be here,” says [founding Genomic Prediction partner and type 1 diabetic Nathan] Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.

For adults, risk scores [such as calculated by 23andMe] are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.

Regalado’s last comment is dead-on – literally. Who will be born and who stays in the freezer is another way of saying “who lives and who dies”.

Technologies such as EPGT are poised to take us further down the bioethical slope of choosing which of our children we want to live and which we choose to die. For the sake of driving this point home, let’s assume that the technology becomes essentially 100% accurate with regard to polygenic risk scoring and we can indeed determine which embryo will have any disease or trait. Since we already permit the use of single gene PGD to prevent certain genetic outcomes, should there be any limit to polygenic PGD? For instance:

(A) Should this technology be used to select only against immediately life-threatening illnesses, or also against diseases that are permanently mentally or physically crippling but do not cause death directly?

(B) Should this technology be used to select against later-in-life diseases that are life-threatening, or also against later-in-life diseases that are mentally or physically crippling but do not cause death directly? (Would it make a difference if the disease appeared in childhood, the teenage years or adulthood?)

(C) Should this technology be used to select against non-disease inheritable traits that society finds disadvantageous (use your imagination here)?

(D) Should this technology be used to select for inheritable traits that society finds advantageous (a slightly different question)?

Depending upon your worldview, answering Questions A through D was, until recently, the purview of God or the random result of chance. Are we ready (and capable) to assume that responsibility? Decide where you would draw the line, then review this short list of famous scientists and see how many of them your criteria would have permitted to be born.

Are you happy with that result? Would you call it good?

It would be nice to get this right since it now appears to be our call to make…

Is Medical Artificial Intelligence Ethically Neutral?

Will Knight has written several articles over this past year in MIT’s flagship journal Technology Review discussing growing concerns in the field of Artificial Intelligence (AI) that should also concern bioethicists. The first is bias. In an article entitled “Forget Killer Robots – Bias is the Real AI Danger”, Knight provides real-world examples of hidden bias affecting people negatively. One example is an AI system called COMPAS, which is used by judges to determine the likelihood of reoffending by inmates who are up for parole. An independent review claims the algorithm may be biased against minorities. In a separate article, Knight identified additional examples of AI algorithms that introduced gender or minority bias into software used to rank teachers, approve bank loans and interpret natural language. None of these examples argued that the bias was introduced intentionally or maliciously (though that certainly could happen).

This is where Knight’s second concern becomes apparent. The problem may be that the algorithms are too complex for even their programmers to retroactively examine for bias. To understand the complexity issue, one must have an introductory idea of how current AI programs work. Previously, computer programs had their algorithms “hard-wired”, so to speak. The programs were essentially complex “if this, then do this” sequences. A programmer could look at the code and generally understand how the program would react to a given input. Beginning in the 1980s, programmers started experimenting with code written to behave like a brain neuron might behave. The goal was to model a human neuron, including the neuron’s ability to change its output behavior in real time. A neurobiologist would recognize the programming pattern as modeling the many layers of neurons in the human brain. A biofeedback expert would recognize the pattern as including feedback that changes the input sensitivities based upon certain output goals – “teaching” the program to recognize a face or image in a larger picture is one such example. If you want to dive deep here, begin with this link.

This type of programming had limited use in the 1980s because computers could model only simple neurons, and only a limited number at one time. Fast forward to the 21st century, and after 30 years of Moore’s Law of exponential growth in computing power and complexity, these neural networks are suddenly modeling multiple layers with millions of neurons. The programs are becoming useful for analyzing complex big data and finding patterns (the proverbial needle in a haystack), including in medical diagnosis and patient management. The problem is that even the programmers cannot simply look at these programs and explain how they came to their conclusions.
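To make the opacity problem concrete, here is a toy two-layer neural network in Python that “teaches” itself a simple input-output rule (XOR) through feedback. It is a sketch under obvious simplifications, a handful of modeled neurons and a trivial task, not a model of any clinical system.

```python
import numpy as np

# A toy two-layer neural network that learns XOR by feedback (backpropagation).
# Illustrative only: real medical AI stacks many layers and millions of weights,
# which is exactly what makes its decisions hard to inspect.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights ("synapses")
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    hidden = sigmoid(X @ W1)        # layer 1: modeled "neurons" fire
    output = sigmoid(hidden @ W2)   # layer 2: the network's prediction
    error = y - output              # feedback: how wrong were we?
    # Nudge every weight slightly in the direction that reduces the error.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_out * 0.5
    W1 += X.T @ d_hid * 0.5

print(output.round(2))  # should end up close to [[0], [1], [1], [0]]
```

Even in this toy, no single trained weight means anything a human can point to; the “knowledge” is smeared across the whole matrix. Scale that to millions of neurons and the inspection problem Knight describes becomes obvious.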

Why is this important to consider from a bioethics standpoint? Historically, arguments in bioethics could generally be categorized as consequentialist, deontological, virtue-based, hedonistic, divine command, and so on. One’s stated position was open to debate and analysis, and the underlying ethical worldview was apparent. A proprietary, cloud-based, black-box, big-data neural network system making a medical decision obscures, perhaps unintentionally, the ethics behind the decision. The “WHY” of a medical decision is as important as the “HOW”. What goes into a medical decision often includes ethical weighting that ought to be as transparent as possible, and these weightings are presently not easily examined in AI decisions. The bioethics community therefore needs to be vigilant as more medical decisions begin to rely on AI. We should welcome AI as another tool in helping us provide good healthcare. Given the above concerns regarding AI bias and complexity, however, we should not simply accept AI decisions as ethically neutral.

Dr. Smartphone

My brother tells me my doctoring days are done. We keep up a lively, ongoing email discussion of current technologies as they relate to topics such as big-data analysis, the Internet of Things (IoT) and smartphone technology. He recently challenged me with the claim that, given the rapid increase in computational power and the sophistication of data analysis, smartphones will soon replace doctors as the main source of medical diagnosis. He is probably correct. But will my doctoring days be over?

Consider the linked article by Madhumita Murgia in The Financial Times (you may get one free view unless you have a subscription; otherwise Google “Murgia smartphone”). Murgia offers a fairly exhaustive catalog of current and looming smartphone apps and attachments that are – frankly – amazing. A fair number are backed by technical and clinical staffs, as well as massive computational clouds, that analyze what you and your smartphone observe – about you. By tracking the number of calls you initiate, your movements (or changes in either), the quality of your voice, the readings of EKG sensors on the phone cover and your voluntary responses to personalized texts, your smartphone can instantly analyze your current physiology and often correctly identify pathophysiology better than some (most?) physicians. Given the IoT technology trajectory, the smartphone will only get better, and at a far faster and broader pace than the average MD can manage during his or her limited 30 years of solo or group practice experience. It is the medical “Wisdom of the Crowd”, to borrow from a current TV show. Read the linked article (if you can) just to get a glimpse of what is already here and what will soon be available.
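To illustrate the kind of passive analysis these apps perform, here is a minimal Python sketch of a behavioral screen built on phone-derived signals. Every feature, baseline value and threshold below is invented for illustration; I am not reproducing any real app’s algorithm.

```python
from dataclasses import dataclass

# Sketch of a passive-sensing behavioral screen. All features, baselines,
# and the alert threshold are hypothetical, for illustration only.

@dataclass
class DailySample:
    calls_initiated: int     # social activity
    km_traveled: float       # mobility
    voice_pitch_var: float   # crude proxy for vocal prosody

def deviation_score(today: DailySample, baseline: DailySample) -> float:
    """How far does today depart from this user's own baseline?"""
    return (
        abs(today.calls_initiated - baseline.calls_initiated) / max(baseline.calls_initiated, 1)
        + abs(today.km_traveled - baseline.km_traveled) / max(baseline.km_traveled, 1e-6)
        + abs(today.voice_pitch_var - baseline.voice_pitch_var) / max(baseline.voice_pitch_var, 1e-6)
    )

baseline = DailySample(calls_initiated=6, km_traveled=22.0, voice_pitch_var=1.0)
today = DailySample(calls_initiated=1, km_traveled=0.5, voice_pitch_var=0.4)

if deviation_score(today, baseline) > 1.5:  # hypothetical alert threshold
    print("Flag for follow-up: behavior departs sharply from baseline.")
```

The point is that none of these signals requires the patient to report anything; the phone simply watches.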

Security and privacy issues may be the one limiting factor in this technological progress. Consider one example of current technology in the area of smartphone postpartum depression diagnosis and management. A company called Ginger.io has developed a smartphone-based tool that identifies depression through a series of interactive text questions with a company coach, combined with raw data from the phone itself: the number and length of phone calls to friends and an analysis of the quality of your voice (no actual content – yet). Joseph Walker at the Wall Street Journal (subscription needed) offered one patient’s experience:

Tara Dye, who participated in Novant’s postpartum program, said she wasn’t aware of the extent to which her smartphone data was tracked. Ms. Dye says she was told the app would record her location and how far she traveled, but she didn’t realize that her behavior was being probed for a link to depression. She says she doesn’t mind the extent of the tracking, because it was in service of her health care, but she wishes there had been greater disclosure.

Finally, Andy Kessler, also at the Wall Street Journal, argues that we are too worried about all this smartphone monitoring. He believes courts will eventually treat the use of our personal data (heartbeats, facial recognition, voice quality and so on) as a property-rights issue, and that companies such as Ginger.io and Apple will then have to pay us for the use of our personal data. That may well be. But it does not currently put the “data horse” back in the privacy barn.

I am not sure where that leaves me or other average doctors as smartphones eclipse us in diagnostic acumen. Like most technological advances during my practice lifetime, I have worked to embrace the ones that work and to sift through and discard the ones that did not. Will there still be a place in the practice of medicine for the one-on-one, patient-doctor relationship? An 87-year-old patient of mine recently commented that although I had not significantly improved her chronic back pain, she appreciated the time I took, the education I provided, and the reassurance that her problem was not more severe. Perhaps she was just being nice.

My brother would argue the smartphone would have been faster, cheaper and avoided the risk of her traveling to my office. I’m not sure. Let me ask Dr. Siri…

Is Obfuscation Ever Helpful in Science or Ethics?

Obfuscation and science would seem to be polar opposites. The scientific method hinges upon correctly identifying what one starts with, making a single known alteration in that starting point, and then accurately determining what one ends up with. Scientific knowledge results from this process. Accidental obfuscation in that three-step process necessarily limits the knowledge that could potentially be gleaned from the method. Peer review normally identifies and corrects any obfuscation. That is its job. Such peer review can be ruthless in the case of intentional obfuscation. It should be. There is never any place for intentionally misrepresenting the starting point, the methods or the results.

Until now?

In an excellent article in Technology Review, Antonio Regalado describes the current status of research where human embryonic stem cells “can be coaxed to self-assemble into structures resembling human embryos.” The gist of the article is that the scientists involved are excited and amazed by the stem cells’ ability to self-organize into structures that closely resemble many features of the human embryo. Perhaps more importantly, per Regalado:

“…research on real human embryos is dogged by abortion politics, restricted by funding laws, and limited to supplies from IVF clinics. Now, by growing embryoids instead, scientists see a way around such limits. They are already unleashing the full suite of modern laboratory tools—gene editing, optogenetics, high-speed microscopes—in ways that let them repeat an experiment hundreds of times or, with genetic wizardry, ask a thousand questions at once.”

This blog has reported on Synthetic Human Entities with Embryo-like Features (SHEEFs) before (see HERE and HERE for starters). The problem from a bioethical standpoint is this: is what we are experimenting upon human, and thus deserving of the protections on permissible research that we presently give to other human embryos? Answering that ethical question honestly and openly seems to be a necessary starting point.

Enter the obfuscation. Consider just the following three comments from some of the researchers in the article:

When the team published its findings in early August, they went mostly unnoticed. That is perhaps because the scientists carefully picked their words, straining to avoid comparisons to embryos. [One researcher] even took to using the term ‘asymmetric cyst’ to describe the [amniotic cavity-like structure] that had so surprised the team. “We have to be careful using the term synthetic human embryo, because some people are not happy about it,” says [University of Michigan professor and lab director Jianping] Fu.

“I think that they should design experiments to focus on specific questions, and not model everything,” says Insoo Hyun, professor and ethicist at Case Western University. “My proposal is, just don’t make the whole thing. One team can make the engine, another the wheels. The less ambiguous morally the thing is that you are making, the more likely you can do your research unimpeded.”

“When Shao presented the group’s work this year, he added to his slides an ethics statement outlined in a bright yellow box, saying the embryoids ‘do not have human organismal form or potential.’”

This last comment seems to contradict the very emphasis of the linked article. As Regalado nicely points out: “The whole point of the structures is the surprising, self-directed, even organismal way they develop.”

Honestly, at this point, most are struggling to understand whether the altered stem cells have human organismal form or potential. I suspect everyone thinks they must, or else researchers would not be so excited to continue this research; the value of the research increases the closer a SHEEF gets to being human. As our techniques improve, at what point does a SHEEF have the right to develop as any other normal embryo? Said differently, given their potential, and particularly as our techniques improve, is it right to create a SHEEF to be just the engine or the wheels?

Having scientists carefully pick their words and strain to avoid comparisons is not something scientists should ever be doing. Doing so obfuscates both the science and the ethics. Does anyone really think that is a good thing?

Mental Health ERISA Law for Dummies

My son is an ERISA attorney whose present work requires him to make sure that large group insurance plans offered by companies comply with various federal statutes, such as the regulations surrounding the PPACA (i.e., ObamaCare). In one of our recent discussions about healthcare access, he made me aware of some federal laws regarding the provision of mental health benefits of which I was heretofore completely ignorant. In my practice, I have frequently been frustrated trying to get mental health care for some of my patients, some of whom appeared to have reasonable health insurance that turned out to have rather minimal mental health coverage, a condition ERISA nerds refer to as lack of parity between mental health benefits and covered medical and surgical benefits. This is thankfully changing. Without getting into the tedious minutia of ERISA law (and it is very tedious), let me take you on an abbreviated tour of these mental healthcare federal statutes.

Prior to 1996, coverage for mental health care was unambiguously less generous than for physical illness. In 1996, the Mental Health Parity Act (MHPA) passed, requiring parity in annual and aggregate lifetime limits compared to med/surg benefits. The Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA) expanded parity to include treatment limitations, financial issues such as co-pays, and in- and out-of-network coverage. However, and this was and continues to be a major “however”, neither of these federal statutes mandated any specific mental health coverage; they simply required insurers who chose to provide mental health coverage to do so with parity with other medical and surgical benefits. I like to think of it this way: as Title IX established parity between the sexes, MHPA and MHPAEA tried to establish parity between mental health coverage and other medical coverage. If you want to get into the minutia, begin HERE and HERE.

With passage of the PPACA in 2010, both MHPA and MHPAEA suddenly developed some teeth. The PPACA mandated coverage of certain mental health and substance abuse disorders, and the benefits for those covered services must now have parity with other medical and surgical benefits. For a deeper dive, see HERE. But oddly, the sharpness of MHPA’s and MHPAEA’s new teeth varies by state. For technical reasons that perhaps only an ERISA attorney can understand, there remain state-by-state variations in interpretation of the coverage minimums of the PPACA’s 10 ‘required’ Essential Health Benefits (EHB); see HERE for more details (particularly the chart at the bottom of the linked article listing benefits by state).

Nonetheless, armed with these statutes, mental health advocates are demanding their parity. Recently, the 2nd US Circuit Court of Appeals allowed an ERISA lawsuit to proceed against a large health plan group administrator for its alleged reduction in mental health benefits for services provided to patients. Time will tell whether this encourages other legal challenges for more parity.

What does the foregoing mean from a bioethics standpoint? This blog has frequently discussed the problems of healthcare access for some of our most vulnerable members of society, many of them related to mental health struggles. While I am no fan of the PPACA in general, this is one result that I applaud. More work needs to be done to determine exactly which mental health issues get covered, and perhaps who gets to decide. Until we all behave like the Good Samaritan toward all of our neighbors, it may take statutes like these to nudge us along the way.

How paranoid should I be about my personal health care data privacy?

A recent Wall Street Journal article by Twila Brase suggests that anonymous medical data may not be so anonymous. This raised my paranoia antennae. Her concern focuses on the new 21st Century Cures Act, which not only significantly increased funding for cancer research and opioid treatment programs, but also created “an ‘information commons’: a government-regulated pool of data accessible to all health researchers, regardless of background, training or motive.” The new law does not give patients any method of opting out of this data-sharing. It specifically prohibits what is called “information blocking” by health care providers, forcing hospitals and doctors to share information with government researchers.

But this is America – I thought anyone could opt-out of anything!

Granted, all of this information is anonymized, so in theory no one can figure out that a particular unique kink in some strand of DNA is your unique kink. But as my favorite ESPN sportscaster, Lee Corso, says every weekend: “Not so fast, my friend!” Big Data and its analysis may have taught us (and are continuing to teach us) how to reverse the anonymization process. Latanya Sweeney and colleagues at Harvard were able to identify a majority of individuals in the Personal Genome Project by name, using limited demographics. MIT geneticist Yaniv Erlich and his student Melissa Gymrek were able to identify 50 people whose DNA was available online in free-access databases. Now granted, both of these groups of people are extremely intelligent, likely way smarter than your average computer hacker intent on just stealing and then selling your credit card numbers. Right?
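The mechanism behind such re-identification is sobering in its simplicity: “anonymous” records still carry quasi-identifiers (ZIP code, birth date, sex) that can be joined against a public roster such as a voter list. Here is a minimal Python sketch of that linkage attack; all of the records are fabricated for illustration.

```python
# Minimal sketch of a linkage (re-identification) attack of the kind
# Sweeney demonstrated. All records below are fabricated.

# "Anonymized" health records: names removed, quasi-identifiers kept.
health_records = [
    {"zip": "37013", "birth_date": "1961-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "37215", "birth_date": "1955-02-14", "sex": "M", "diagnosis": "diabetes"},
]

# A public roster (e.g., a voter list) with names and the same fields.
voter_list = [
    {"name": "J. Doe", "zip": "37215", "birth_date": "1955-02-14", "sex": "M"},
    {"name": "A. Roe", "zip": "37013", "birth_date": "1961-07-31", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def reidentify(health, roster):
    """Join 'anonymous' records to names via shared quasi-identifiers."""
    for record in health:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        matches = [v["name"] for v in roster
                   if tuple(v[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match means re-identification
            print(f"{matches[0]} -> {record['diagnosis']}")

reidentify(health_records, voter_list)
```

Sweeney famously estimated that ZIP code, birth date and sex alone uniquely identify a large majority of Americans, which is what makes this almost trivially simple join so potent.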

Who would buy this information anyway?

Maybe your employer? Let me introduce you to the Preserving Employee Wellness Programs Act (perhaps appropriately known as PEWPA), a bill making its way through the House of Representatives that would side-step the privacy protections in the Genetic Information Nondiscrimination Act (GINA) by exempting genetic tests that are part of workplace wellness programs from GINA’s privacy protections (future headline: “PEWPA poo-poos GINA?” – sorry, couldn’t resist). While the bill is meant to reduce healthcare costs, individual employees could face thousands of dollars in added healthcare costs if they refuse to share their DNA in company-sponsored wellness programs.

So my healthcare may cost me more if I don’t share my private genetic information with my employer, even though I am reassured the data will be stored anonymously? What could possibly go wrong?

If you have never watched the movie “Gattaca”, please turn off your computer right now and go watch it. It provides at least one example of what could possibly go wrong. Then come back and tell me why I should not be just a little paranoid about the way things are heading.

CRISPR and Identity

Dr. Joel Reynolds, a postdoctoral fellow at The Hastings Center, recently wrote a very poignant essay in Time magazine arguing that our increasing ability to edit our own genetic code risks eventually eliminating the very genetic code that results in people like his younger brother Jason, who was born with muscle-eye-brain disease, resulting in muscular dystrophy, hydrocephalus, cerebral palsy, severe nearsightedness and intellectual disability. In answering his question – “What, precisely, are we editing for?” – he makes the case that editing the code that resulted in Jason effectively eliminates Jason. I encourage you to read the short article, as any further summary on my part would not do it justice.

How much change to my genetic code would alter my identity? This is an important ethical question as scientists seek to use our growing genetic knowledge to alter that code. Using preimplantation genetic diagnosis (PGD) to eliminate a diseased segment of genetic code also eliminates the rest of that genome, since a completely disease-free human embryo is selected for implantation and the disease-carrying embryo is destroyed/killed. Obviously, the identity of the implanted embryo is completely different from that of the destroyed embryo. No identity preservation here.

CRISPR-Cas9 (CRISPR) is held out as the beginning of future techniques to successfully remove and replace sections of the human genetic code. Diseases caused by point mutations would seem to be ideal challenges for CRISPR: correcting a single nucleotide effectively cures the individual of the disease and, at least on cursory consideration, leaves the identity of the individual intact (after all, we would be changing only one nucleotide out of the roughly 3.2 billion in the individual’s unique genetic code).
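To convey just how small such an edit is, here is a toy Python sketch of a single-nucleotide correction. The sequence and locus are invented for illustration; actual CRISPR editing is a guided biochemical process, not string manipulation.

```python
# Toy illustration of the scale of a point-mutation correction.
# The sequence and position are invented; real CRISPR editing is a guided
# biochemical process, not string surgery.

genome = "ATGGTGCACCTGACTCCTGTGGAGAAGTCTGCC"  # a short, invented stretch of DNA

def correct_point_mutation(seq: str, position: int, expected: str, replacement: str) -> str:
    """Swap a single nucleotide at a known locus, verifying what we expect to find there."""
    assert seq[position] == expected, "unexpected base at target locus"
    return seq[:position] + replacement + seq[position + 1:]

edited = correct_point_mutation(genome, position=19, expected="T", replacement="A")
changed = sum(a != b for a, b in zip(genome, edited))
print(f"{changed} base changed out of {len(genome)}")  # -> 1 base changed
```

One character out of 33 here; one nucleotide out of 3.2 billion in a real genome. The technical edit is tiny, which is precisely what makes the identity question so interesting.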

Color blindness is one such example, one that I “suffer” from. If my parents had been able to change my genetic code just after conception to eliminate my color blindness, it seems I would be the same man I am today, absent the need to have my wife select my ties and socks. However, other life experiences might have been available had my color vision been intact, such as flying an F-14 fighter or completing the NASA astronaut selection program, both of which require normal color vision. Likely, I would have been someone with the same identity but with very different life experiences.

At the more serious end of point-mutation diseases is Tay-Sachs disease, in which a defective enzyme fails to prevent the buildup of toxic fatty deposits in the brain and spinal cord and, in the infantile form, results in mental impairment, severe sensory pain and premature death. If I had the infantile form of Tay-Sachs disease and my parents changed my genetic code, would my identity be different? Am I the same “me” experiencing a tremendously different life, or am I a different “me”? If I am a different “me”, is that just because we hold cognitive ability/behavior/function critical to one’s identity? One can lose the function of the nerves in one’s leg and not consider this a challenge to one’s identity; sustain an injury to one’s brain and the challenge to identity is stronger. Dr. Reynolds makes a similar case in describing his brother Jason as he actually was, compared to how he might have been had the prayers for healing been answered or genetic editing been available. To paraphrase, a “corrected” Jason is no longer Jason.

None of the foregoing discussion considers the human soul as it relates to identity, or whether alterations in the human genome affect the human soul (or vice versa?). Those issues will have to wait for another blog post. For now, my question is this: How much of my genetic code can I change and still be me?

Is Involuntary Temporary Reversible Sterilization Always Wrong?

Ever since Janie Valentine’s blog post last week, I have been thinking about the problem of repeat drug offenders and their children. My home state is also Tennessee, so I read Judge Sam Benningfield’s order (to reduce prison sentences by 30 days for any drug offender willing to “consent” to voluntary temporary sterilization) with particular local and regional interest.

My office is on a street with more than one clinic dispensing Suboxone, a synthetic opioid designed to assist in narcotic withdrawal or to substitute for pain management with less potential for abuse. It is not uncommon for me to see the parking lots of these clinics full of cars, with unsupervised children playing with other unsupervised children while their parents are inside receiving treatment. No doubt some of these patients are opioid dependent and not necessarily opioid impaired. My point is simply to convey the sheer volume of the opioid problem and to highlight that these are the families that are doing well: the children are still with their parents, and the parents are not (obviously) under the jurisdiction of the court system.

One partner in my practice and his wife are foster parents and have opened their home to children of repeat drug offenders. These children have often been ordered by child protective services to be temporarily removed from their homes because of their parents’ incarceration for a drug offense or court-ordered treatment. The usual placement is a group of two or three siblings, often including a newborn baby in the throes of opioid withdrawal. After seeing several iterations of this pattern, I can certainly sympathize with the judge’s moral outrage and frustration at seeing multiple children, often within the same family, born in opioid withdrawal, though I must agree with Janie Valentine and Steve Phillips that in the case of the judge’s court order (now rescinded), such consent is, at best, coerced given the incarceration.

This brings me to the point of today’s blog. Can there be any condition under which it is right to prevent repeat opioid drug offenders from conceiving a child while impaired by opioid addiction? No one will claim that conceiving a child while addicted to opioids results in a desirable outcome for parent or child. Choosing to avoid conception requires the very planning that opioid addiction frequently impairs, and the current epidemic of opioid-addicted newborns proves that expecting voluntary conception avoidance by the opioid-impaired is a non-starter. Voluntary reversible forms of sterilization (none 100% successful at preventing conception) are available but have non-zero barriers (access, cost, side effects, compliance, efficacy). Reducing the barriers for those willing to choose temporary sterilization seems reasonable. But what about individuals not willing to voluntarily avoid conception while opioid impaired? Does society have any right to temporarily (reversibly) prevent conception for some time frame in someone impaired by opioids? Should this happen after the first birth of an opioid-addicted newborn? Or only after the fourth such birth? At what point should the autonomy of the opioid-impaired yield to avoiding maleficence toward a child?

Let me additionally be clear about what I am not asking or claiming. I am not making some eugenics claim that opioid impairment is genetically determined such that eliminating offspring of individuals suffering from opioid impairment somehow reduces the future risk of opioid dependency within the larger population. I am also not claiming that individuals who are currently opioid impaired will always be opioid impaired. I am not claiming that opioid impaired individuals are necessarily permanently bad parents; when not opioid addicted, they may in fact be wonderful parents. Finally, I am not asking that the sterilization be permanent, as I do not think that opioid impairment is permanent.

Again: Can there be any condition that makes it permissible to involuntarily temporarily reversibly sterilize repeat opioid drug offenders to avoid conceiving a child while opioid impaired?