Citizenship, Surrogacy and the Power of ART

A recent LA Times article by Alene Tchekmedyian explores a complicated case involving birthright citizenship, surrogacy and same-sex marriage. Briefly, a California man, Andrew Banks, married an Israeli man, Elad Dvash, in 2010. At the time, same-sex marriage was not legal in the US, leaving Elad unable to acquire a green card for residency through the marriage, so the couple moved to Canada, where Andrew holds dual citizenship. While in Canada, the couple conceived twin boys, Aiden and Ethan, using assisted reproductive technology (ART): eggs from an anonymous donor were fertilized with sperm from Elad and Andrew, then implanted in the womb of a female surrogate and carried to term. When the US Supreme Court struck down the federal law denying benefits to legally married gay couples in 2013, Elad applied for and was granted his green card. The present controversy arose when Andrew and Elad applied for US passports for the twins. US State Department officials required a detailed explanation of the boys’ conception, eventually demanding DNA tests, which confirmed Aiden to be the biological son of Andrew and Ethan the biological son of Elad. Aiden was granted a US passport; Ethan was denied. The family has since traveled to the US (Elad with his green card and Ethan with his Canadian passport and a temporary six-month visa), where they are now suing the State Department for Ethan’s US birthright citizenship. They argue that the applicable statute wrongly places them in the category of children born out of wedlock rather than recognizing their marriage, thus discriminating against them as a binational LGBTQ couple.

Birthright citizenship is a complicated legal arena, and I am no lawyer. The US is even more complicated than most countries because we allow birthright citizenship to be conferred jus soli (right of the soil) in addition to jus sanguinis (right of blood). The twins were not born in the US, so establishing “bloodline” is needed. The law specifies conditions where one parent is a US citizen and one is not, with further differentiation depending on whether the children of the US citizen were born in or out of wedlock. The requirements also vary depending on whether the US citizen is male or female, the law being more lenient (citizenship easier to acquire) for the child of a woman than for the child of a man.

While the legal challenge here will almost certainly involve potential issues of discrimination against LGBTQ binational couples, the problem is really with the current legal definition of parent as it relates to surrogacy in general. The State Department actually has a website dedicated to answering questions about foreign surrogacy and citizenship. The real issue is that the State Department relies upon genetic proof of parentage for foreign surrogacy births. In the present case, the surrogacy occurred outside the US, Elad is the genetic father of Ethan, and Elad is not a US citizen; therefore Ethan is not a US citizen. While I’m deep in the weeds here, technically, Aiden and Ethan are not fraternal twins in the usual sense but rather half-siblings (and this assumes the donor eggs came from the same woman; otherwise the boys would be genetically unrelated despite sharing the same womb through the magic of ART). Had Ethan been born via surrogacy in the US, he would have acquired his citizenship jus soli (see this US map for surrogacy-friendly states near you).

This problem is just as confounding for heterosexual couples using foreign surrogates, and the problem is global. A more detailed technical legal discussion may be found here. A heterosexual couple using donor eggs, donor sperm and a foreign third-party surrogate would have exactly the same problem establishing US citizenship for “their” child. A similar problem would exist for an adopted embryo gestated in a foreign country by a foreign surrogate. Only if either the egg or the sperm of the US citizen is used for the surrogate birth would the child be granted birthright citizenship.

The main difference for homosexual couples is that only one spouse can presently be the biological parent. I say “presently” because with ART it is theoretically possible (and may become actually possible in the future) to convert a human somatic cell into either a male sperm or a female egg. At that point, both spouses within a same-sex marriage could be the biological parents of their child. The present legal issue is not the result of a cultural prejudice against anyone’s sexuality but with the biological prejudice of sex itself. ART has the potential ability to blur the categories of sex as culture is now blurring the categories of gender. Should we consider this a good thing?

Given the present technological limits of ART, the simple issue of US citizenship could be resolved in all these cases if the US-citizen parent simply adopted the child. Elad correctly points out that while adoption of Ethan by Andrew would grant Ethan US citizenship, it would not grant him birthright citizenship, a necessary requirement should Ethan someday run for US president. ART may be forcing us to reconsider our definition of parent, but should it change our definition of biology? Ethan is the biological son of Elad. He can become the legally adopted son of Andrew and enjoy the benefits of US citizenship, as his half-brother Aiden currently does. He cannot become the biological son of Andrew and enjoy the additional benefit of birthright citizenship via jus sanguinis.

Should we change the definition of birthright citizenship because ART is changing our definition of parent?

Will Medical Compliance Ever Become Non-Voluntary?

A recent article by Dr. Lisa Rosenbaum in the New England Journal of Medicine explored both the benefits and drawbacks of digital adherence monitoring. The focus was on the FDA’s recent approval of Abilify MyCite, a technology that combines the medication aripiprazole, used to treat psychiatric diseases such as schizophrenia, certain features of bipolar disorder, and depression, with a digital ingestion tracking system. This voluntary digital health feedback system (DHFS) works by having the patient wear a skin patch that is triggered when the pill contacts the acid in the stomach. The event is then recorded and tracked on the patient’s smartphone. The patient can then permit caregivers and/or physicians to access the data via a web portal. Proteus, the company responsible for the DHFS, has shown improvement in patients’ systolic blood pressure using the DHFS compared with standard care. The article primarily focuses on using the technology to help doctors work with their patients to determine the reasons for non-compliance.

While this presently voluntary technology can obviously track pill ingestion, and this data can certainly help doctors and patients improve medication adherence, I wondered about non-voluntary uses of the technology. This particular DHFS confirms that the prescribed pill was actually ingested, regardless of what the patient or their caregiver may claim. Would an insurance company be permitted to have access to this data in exchange for payment for a particularly expensive medication? Could a government agency require such a system in exchange for covering a procedure whose outcome is improved with the use of a given medication?

Dr. Rosenbaum offered in her article that she thought it unethical to withhold coronary artery bypass from one of her patients, who she was fairly certain would not subsequently take the dual antiplatelet therapy after revascularization. Using a DHFS eliminates mere suspicion. Premature discontinuation of thienopyridine therapy (antiplatelet drugs such as Effient, Ticlid, or Plavix) after cardiac stent placement has been shown to increase the risk of both re-hospitalization and death within the subsequent 12-month period. Given the success of the Proteus DHFS in reducing systemic high blood pressure, mandating a DHFS to monitor antiplatelet therapy immediately after cardiac stent placement should reduce both patient morbidity and mortality during the following 12 months.

A consequentialist in charge of public healthcare expenditures might disagree with Dr. Rosenbaum regarding the ethics of providing a revascularization procedure to an individual who is poorly compliant with beneficial post-procedure medication. Bluntly: why spend the money if the patient (for whatever reason) is going to act in a manner that reduces the benefit of her procedure? Thankfully, money is not the only healthcare utility worth measuring, and economists are not yet fully in charge of healthcare delivery, though they appear to have an increasingly important seat at the table.

So, I think DHFS technologies such as Abilify MyCite will slowly become non-voluntary.

The Brain and The Internet

The current Technology Review contains an article by Adam Piore featuring Dr. Eric Leuthardt, who, as the title claims, is “The [Neuro]Surgeon Who Wants to Connect You to the Internet with a Brain Implant”. After spending Christmas with my married millennial children, I am convinced no further connections are required. But Dr. Leuthardt isn’t satisfied with clumsy thumbs and smartphones – he wants a hard-wired, direct brain-to-Internet solution. The article nicely covers both the history and the current “state-of-the-art” technology of brain-machine interfaces, as well as the barriers yet to be solved before Dr. Leuthardt’s dream of a brain-Internet connection becomes reality. I encourage reading the entire article as backdrop to the questions I will focus upon for the remainder of this blog entry. Dr. Leuthardt’s research partner, Gerwin Schalk, a computer scientist focused on decrypting the vast volume of electrical signals recorded by current brain implants, sets the stage with the following quote:

“What you really want is to be able to listen to the brain and talk to the brain in a way that the brain cannot distinguish from the way it communicates internally, and we can’t do that right now,” Schalk says. “We really don’t know how to do it at this point. But it’s also obvious to me that it is going to happen. And if and when that happens, our lives are going to change, and our lives are going to change in a way that is completely unprecedented.”

What would it mean for us to develop and implement a brain interface separate from our current physical senses of seeing, hearing, smelling, tasting and touching? What Schalk and Leuthardt want is a brain interface that is as good at receiving sensory input as our current five senses and equally as good at affecting our physical environment as our current voice, arms and legs. But it doesn’t have to stop there (and in fact, I do not believe it would). If the brain cannot distinguish data input via these new artificial links from data input via “normal” physiology, why not insert novel visual, auditory, olfactory, tactile or motor information, as well as linkages among these – the experiences of which become actual memories? How could one tell memories in which you had actually participated from ones that were virtual? Would it matter? Has anyone had any trouble with unwanted Internet ads or computer viruses lately?

For the record, I am generally all-in for most replacement artificial body parts, such as heart, lung, skin, kidney, liver and limbs (allowing for the bioethical concerns generally voiced on this blog). I am admittedly concerned as we develop technologies that start accessing (and potentially augmenting or replacing) portions of the human brain, as I think this starts to tinker with an individual’s very sense of self – one’s identity. Does altering the brain’s manner of sensory processing potentially also alter the brain’s experience of its sense of self? Until we answer that question, we should tinker extremely cautiously or perhaps not at all (I presently favor the latter).

Of course, all of this skirts around the larger issue of exactly where my sense of self lies. Does my brain completely contain and therefore solely determine my identity or is my identity part of a more complex interface between the physical brain and a non-physical soul? That is a big question for a six-paragraph blog to answer but one that deserves consideration as we seek to develop artificial interfaces within the brain that not only change the way I experience my environment but potentially how I experience my self.

With regard to hooking my brain directly to the Internet, given what I’ve seen of the Internet to date, I will leave my thumbs and smartphone as my interface of choice, at least for the near future.

The Hubris of Head Transplantation

As a rehabilitation physician with an interest in acute spinal cord injury, I try to keep abreast of neuroscience research, in both animals and humans, that might suggest a breakthrough in spinal cord injury recovery. Sadly, despite increased public awareness generated by high-profile individuals who suffered this devastating injury (notably Christopher Reeve and his foundation), ongoing research in chemical, cellular transplant (including some stem-cell) and electrical stimulation therapies, and advances in emergency medical and surgical management of the acute spinal injury, medical science has not seen dramatic improvement in spinal cord injury functional recovery since I began my practice almost 30 years ago. I spend some time reviewing the results of my patients’ Internet research into “claims of cures” as they desperately look for any solution to their disability. I thought I had seen everything until I was given a link to a 2015 TEDx talk by Italian neurosurgeon Dr. Sergio Canavero and saw a subsequent USA Today article regarding his plans for an imminent head transplant scheduled to occur in China later this month or early next year. In fairness, 99.999% of the scientific news coverage condemns the planned surgery (including the TED community), and the popular news coverage consistently leads with a picture of Gene Wilder in his role in the Mel Brooks movie Young Frankenstein (as in this link).

In short, Dr. Canavero is planning to remove the head of a patient who has a severe progressive musculoskeletal disease and transplant it onto an otherwise healthy brain-dead individual who will act as the donor body. Canavero claims that, unlike random high-energy trauma, which destroys a significant section of the spinal cord in an accident, his technique uses a precision cutting instrument that minimizes cord trauma. He combines this with cryopreservation techniques that cool the head to 12 degrees Celsius during the transplant, uses polyethylene glycol (PEG) – a substance commonly used as a laxative – to reconnect the spinal cord to the donor body, and follows with proprietary electrical stimulation of the donor body’s spinal cord to maximize recovery. Sounds pretty easy, right?

Ignoring the ethical issues for a moment, the overarching technical problem is that the head transplant technique has never worked in any animal. Subsections of the technique have shown limited benefit, such as using PEG to encourage spliced segments of spinal cord to heal. But a success (partial at best) in one small area never guarantees success in a broader application. Condemning the whole endeavor on ethical grounds is therefore almost beside the point: recommending a surgical procedure that has never succeeded should be ethically abhorrent regardless of anyone’s worldview.

One final comment may be worth considering. If you watch Dr. Canavero’s TEDx YouTube talk to the bitter end, he gives a hint at what motivates his work. He makes the case for eventually perfecting his technique so that the human brain can become immortal: transplanting a head onto a younger body (and repeating the process) effectively allows a head to live forever. He also suggests connecting a head to a machine to achieve the same result. He is really talking about immortality of human consciousness, and he actually refers to the brain as a filter for consciousness. It seems the pinnacle of human hubris to believe that we can achieve immortality for ourselves, unless it were already available to us.

I suggest that John 6:47-51 offers a better way.

Uterine Transplantation – for Men?

Susan Haack began exploring the topic of uterine transplantation in women on this blog back in February 2014. In just under four years, the technology has not only successfully resulted in live births in several women who received uterine transplants, but the outgoing president of the American Society for Reproductive Medicine, Dr. Richard Paulson, is suggesting we consider exploring the technique in men. While there are certainly hurdles to overcome (the need for cesarean section for the actual birth, hormone supplementation, and the complicated nature of the transplant even for cisgender women), Dr. Paulson does not consider these barriers insurmountable for transgender women.

Dr. Julian Savulescu, professor of practical ethics at Oxford, has cautioned that initiating a pregnancy in a transgender woman may be unethical if it poses significant risk to the fetus. The above-linked article misquotes his concern as a concern over “any psychological harm to the child born in this atypical way”. The following is his actual quote from his own blog:

Therefore, although technically possible to perform the procedure, you would need to be very confident the uterus would function normally during pregnancy. The first US transplant had to be removed because of infection. There are concerns about insufficient blood flow in pregnancy and pre-eclampsia. A lot of research would need to be done not just on the transplant procedure but on the effect in pregnancy in non-human animals before it was trialled in humans. Immunosuppressives would be necessary which are risky. A surrogate uterus would be preferable from the future child’s perspective to a transplanted uterus. Uterine transplantation represents a real risk to the fetus, and therefore the future child. We ought to (other things being equal) avoid exposing future children to unnecessary significant risks of harm.

One putative benefit might be the psychological benefit to the future mother of carrying her own pregnancy. This would have to be weighed against any harm to the child of being born in this atypical way.

His concerns are the baseline medical risks involved in using a transplanted uterus to conceive a child regardless of the sex of the recipient. None of his concerns relate to the psychological harm to the child potentially caused by a uterine transplantation in a transgender woman as opposed to a cisgender woman. Savulescu is explicit in the beginning of his blog that “[t]he ethical issues of performing a womb transplant for a [sic] transgender women are substantially the same as the issues facing ciswomen.” Is the only risk to the child “born this atypical way” just the additional need for hormone supplementation in the transgender woman compared to the cisgender woman? Can we really know, a priori, what all of the attendant risks to the child really are with uterine transplantation in a transgender woman?

Regardless, let’s assume Savulescu is correct that there is indeed no ethical difference between a cisgender woman and a transgender woman carrying a child to term via uterine transplantation. There certainly can then be no ethical difference between a transgender woman and a cisgender man doing the same. If the foregoing is true, can there be any ethical barrier preventing a man from using his sperm to fertilize a donor egg and carrying his baby to term via a transplanted uterus? After all, per Savulescu, all we need be concerned about from a bioethical standpoint are the technical issues and risks of uterine transplantation, regardless of the recipient’s biological sex or self-identified gender.

In Genesis, God created two complementary sexes and stated this difference was good. We are moving toward eliminating the differences between the sexes and arguing that this is good. Both claims cannot be correct.

I wonder whether Dr. Haack thought we would get this far down this particular bioethical slippery slope in four short years.

Is Your Polygenic Risk Score a Good Thing?

Back in October, Jon Holmlund wrote a blog entry regarding the popular company 23andMe and its collection of your health-related information along with your genetic material. I missed the significance of that relationship at the time. It took a recent article in Technology Review by my favorite technology writer, Antonio Regalado, to raise my ethical antennae. In his article, he explains the nexus of big-data mining of genetic data and health information (such as is collected by 23andMe) and its potential future use in selecting embryos for IVF – selecting not only against polygenic diseases such as type 1 diabetes but potentially for non-diseases such as height, weight or even IQ.

Pre-implantation genetic diagnosis (PGD) is already used to select embryos for IVF implantation that do not carry genetic patterns such as cystic fibrosis or Down syndrome. Diseases that result from multiple genes (polygenic disorders) defy current PGD methods. Using big-data analysis of health information linked to genetic data, scientists are getting better at computing accurate polygenic risk scores: statistical models that may more accurately ‘guess’ an embryo’s future risk not only for juvenile diabetes but also for later-in-life diseases (such as heart disease, ALS or glaucoma) and for less threatening inheritable traits (such as eye color, height or IQ) that result from multiple genes (and perhaps even environmental factors). There is confidence (hubris?) that with enough data and enough computing power, we can indeed accurately predict an embryo’s future health status and all of his or her inheritable traits. Combine that with all of the marketing data available from Madison Avenue, and we could predict what type and color of car that embryo will buy at age 35.

Ok, maybe not the color…
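Stripped of the statistical machinery, a polygenic risk score is conceptually simple: a weighted sum of risk-allele counts, with weights estimated from large genome-wide association studies. A minimal sketch follows; the variant IDs and effect weights here are invented for illustration, not real GWAS effect sizes:

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# The variant names and effect weights below are made up for illustration;
# real scores use thousands to millions of variants with GWAS-derived weights.

effect_weights = {
    "rs0001": 0.12,   # hypothetical log-odds change per risk allele
    "rs0002": -0.05,  # a protective allele has a negative weight
    "rs0003": 0.30,
}

def polygenic_risk_score(genotype):
    """genotype maps variant ID -> risk-allele count (0, 1, or 2)."""
    return sum(effect_weights[v] * genotype.get(v, 0) for v in effect_weights)

# Two hypothetical embryos with different genotypes
embryo_a = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
embryo_b = {"rs0001": 0, "rs0002": 2, "rs0003": 2}

print(round(polygenic_risk_score(embryo_a), 2))  # 0.19
print(round(polygenic_risk_score(embryo_b), 2))  # 0.5
```

The hard part is not this arithmetic but estimating the weights, which is exactly where the big-data linkage of health records to genotypes comes in.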

Seriously, companies such as Genomic Prediction would like to see IVF clinics eventually use their expanded statistical models to assist in PGD, via a proprietary technique they call Expanded Pre-implantation Genomic Testing (EPGT). Consider the following two quotes from Regalado’s article:

I remind my partners, “You know, if my parents had this test, I wouldn’t be here,” says [founding Genomic Prediction partner and type 1 diabetic Nathan] Treff, a prize-winning expert on diagnostic technology who is the author of more than 90 scientific papers.

For adults, risk scores [such as calculated by 23andMe] are little more than a novelty or a source of health advice they can ignore. But if the same information is generated about an embryo, it could lead to existential consequences: who will be born, and who stays in a laboratory freezer.

Regalado’s last comment is dead-on – literally. Who will be born and who stays in the freezer is another way of saying “who lives and who dies”.

Technologies such as EPGT are poised to take us further down the bioethical slope of choosing which of our children we want to live and which we choose to die. For the sake of driving this point home, let’s assume that the technology becomes essentially 100% accurate with regard to polygenic risk scoring and we can indeed determine which embryo will have any disease or trait. Since we already permit the use of single gene PGD to prevent certain genetic outcomes, should there be any limit to polygenic PGD? For instance:

(A) Should this technology be used to select against immediate life threatening illnesses only or also against immediate mentally or physically permanently crippling diseases that don’t cause death directly?

(B) Should this technology be used to select against later-in-life diseases that are life threatening at the time or also against mentally or physically crippling diseases that don’t cause death directly? (Would it make a difference if the disease occurred as a child, teenager or adult?)

(C) Should this technology be used to select against non-disease inheritable traits that society finds disadvantageous (use your imagination here)?

(D) Should this technology be used to select for inheritable traits that society finds advantageous (a slightly different question)?

Until recently, depending upon your worldview, answering Questions A through D was the purview of God or the random result of chance. Are we ready (and able) to assume that responsibility? Decide where you would draw the line, then review this short list of famous scientists and see how many on it your criteria would permit to be born.

Are you happy with that result? Would you call it good?

It would be nice to get this right since it now appears to be our call to make…

Is Medical Artificial Intelligence Ethically Neutral?

Will Knight has written several articles over the past year in MIT’s flagship journal Technology Review discussing growing concerns in the field of artificial intelligence (AI) that should also concern bioethicists. The first concern is bias. In an article entitled “Forget Killer Robots – Bias is the Real AI Danger”, Knight provides real-world examples of hidden bias affecting people negatively. One example is an AI system called COMPAS, used by judges to determine the likelihood that inmates up for parole will reoffend. An independent review claims the algorithm may be biased against minorities. In a separate article, Knight identified additional AI algorithms that introduced gender or minority bias into software used to rank teachers, approve bank loans and interpret natural language. None of these examples argued that the bias was introduced intentionally or maliciously (though that certainly could happen).

This is where Knight’s second concern becomes apparent. The problem may be that the algorithms are too complex for even their programmers to retroactively examine for bias. To understand the complexity issue, one must have an introductory idea of how current AI programs work. Previously, computer programs had their algorithms “hard-wired”, so to speak. The programs were essentially complex “if this, then do that” sequences. A programmer could look at the code and generally understand how the program would react to a given input. Beginning in the 1980s, programmers started experimenting with code written to behave the way a brain neuron behaves, including the ability of the neuron to change its output behavior in real time. A neurobiologist would recognize the programming pattern as modeling the many layers of neurons in the human brain. A biofeedback expert would recognize it as including feedback that changes the input sensitivities based upon certain output goals – “teaching” the program to recognize a face or image in a larger picture is one such example. If you want to dive deep here, begin with this link.
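The feedback idea described above – adjusting a model neuron’s input sensitivities until its output matches a goal – can be illustrated with a single artificial neuron trained by gradient descent. This is a generic textbook sketch, not code from any of the systems Knight discusses:

```python
import math

# A single artificial "neuron": weighted inputs passed through a squashing
# function, with the weights adjusted by feedback from the output error.

def neuron(weights, bias, inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

# Teach the neuron the logical OR function by repeated feedback.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 1.0

for _ in range(5000):
    for inputs, target in data:
        out = neuron(weights, bias, inputs)
        grad = (out - target) * out * (1 - out)  # error via sigmoid derivative
        weights = [w - rate * grad * x for w, x in zip(weights, inputs)]
        bias -= rate * grad

for inputs, target in data:
    print(inputs, round(neuron(weights, bias, inputs)))  # matches OR truth table
```

Multiply this one neuron into millions, stacked in layers, and the final weights become a pattern no programmer can read off and explain, which is precisely the transparency problem.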

This type of programming had limited use in the 1980s because computers were too simple, able to model only simple neurons and only a limited number at a time. Fast-forward to the 21st century, through 30 years of Moore’s Law of exponential growth in computing power, and suddenly these neural networks are modeling multiple layers with millions of neurons. The programs are proving useful in analyzing complex big data and finding patterns (the proverbial needle in a haystack), including in medical diagnosis and patient management. The problem is that even the programmers cannot simply look at these programs and explain how they came to their conclusions.

Why is this important from a bioethics standpoint? Historically, arguments in bioethics could generally be categorized as consequentialist, deontological, virtue-based, hedonistic, divine-command, and so on. One’s stated position was open to debate and analysis, and the underlying ethical worldview was apparent. A proprietary, cloud-based, black-box, big-data neural network making a medical decision obscures, perhaps unintentionally, the ethics behind the decision. The “why” of a medical decision is as important as the “how”. What goes into a medical decision often includes ethical weighting that ought to be as transparent as possible, and these issues are presently not easily examined in AI decisions. The bioethics community therefore needs to be vigilant as more medical decisions begin to rely on AI. We should welcome AI as another tool in helping us provide good healthcare. Given the above concerns regarding bias and complexity, however, we should not simply accept AI decisions as ethically neutral.

Dr. Smartphone

My brother tells me my doctoring days are done. We keep up a lively, ongoing email discussion of current technologies as they relate to topics such as big-data analysis, the Internet of Things (IoT), and smartphones. He recently argued that, due to the rapid increase in computational power and the sophistication of data analysis, smartphones will soon replace doctors as the main source of medical diagnosis. He is probably correct. But will my doctoring days be over?

Consider the linked article by Madhumita Murgia in The Financial Times (you may get one free view unless you have a subscription, or Google “Murgia smartphone”). Murgia provides a fairly exhaustive list of both current and looming smartphone apps and attachments that are – frankly – amazing. A fair number are backed by technical and clinical staffs, as well as massive computational clouds, that analyze what you and your smartphone observe – about you. By keeping track of the number of calls you initiate, your movements (or changes in either), the quality of your voice, the results of heart EKG sensors on the phone cover, and voluntary responses to personalized texts, your smartphone can instantly analyze your current physiology and often correctly identify pathophysiology better than some (most?) physicians. Given the IoT technology trajectory, the smartphone will only get better, and at a far faster and broader pace than the average MD over his or her limited 30 years of solo or group practice experience. It is the medical “Wisdom of the Crowd”, to borrow from a current TV show. Read the linked article (if you can) just to glimpse what is already here and what will quickly become available.

Security and privacy issues may be the one limiting factor in this technological progress. Consider one example of current technology in smartphone-based post-partum depression diagnosis and management. One company has developed a smartphone tool that identifies depression through a series of interactive text questions with a company coach, as well as raw data from the phone: the number and length of calls to friends and an analysis of the quality of your voice (no actual content – yet). Joseph Walker at the Wall Street Journal (subscription needed) offered one patient’s experience:

Tara Dye, who participated in Novant’s postpartum program, said she wasn’t aware of the extent to which her smartphone data was tracked. Ms. Dye says she was told the app would record her location and how far she traveled, but she didn’t realize that her behavior was being probed for a link to depression. She says she doesn’t mind the extent of the tracking, because it was in service of her health care, but she wishes there had been greater disclosure.

Finally, Andy Kessler, also at the Wall Street Journal, argues we are too worried about all this smartphone monitoring. He believes courts will eventually rule that use of our personal data (heartbeats, facial recognition, voice quality, etc.) is a property rights issue, and that companies such as Apple will then have to pay you for the use of your personal data. That may well be. But it does not currently put the "data horse" back in the privacy barn.

I am not sure where that leaves me, or other average doctors, as smartphones eclipse us in diagnostic acumen. As with most technological advances during my practice lifetime, I have tried to embrace the ones that work and to sift through and discard the ones that did not. Will there still be a place in the practice of medicine for the one-on-one, patient-doctor relationship? An 87-year-old patient of mine recently commented that although I had not significantly improved her chronic back pain, she appreciated the time I took, the education I provided, and the reassurance that her problem was not more severe. Perhaps she was just being nice.

My brother would argue the smartphone would have been faster, cheaper and avoided the risk of her traveling to my office. I’m not sure. Let me ask Dr. Siri…

Is Obfuscation Ever Helpful in Science or Ethics?

Obfuscation and science would seem to be polar opposites. The scientific method hinges upon correctly identifying what one starts with, making a single known alteration in that starting point, and then accurately determining what one ends up with. Scientific knowledge results from this process. Accidental obfuscation in that three-step process necessarily limits the knowledge that could potentially be gleaned from the method. Peer review normally identifies and corrects any obfuscation. That is its job. Such peer review can be ruthless in the case of intentional obfuscation. It should be. There is never any place for intentionally misrepresenting the starting point, the methods or the results.

Until now?

In an excellent article in Technology Review, Antonio Regalado describes the current status of research where human embryonic stem cells “can be coaxed to self-assemble into structures resembling human embryos.” The gist of the article is that the scientists involved are excited and amazed by the stem cells’ ability to self-organize into structures that closely resemble many features of the human embryo. Perhaps more importantly, per Regalado:

“…research on real human embryos is dogged by abortion politics, restricted by funding laws, and limited to supplies from IVF clinics. Now, by growing embryoids instead, scientists see a way around such limits. They are already unleashing the full suite of modern laboratory tools—gene editing, optogenetics, high-speed microscopes—in ways that let them repeat an experiment hundreds of times or, with genetic wizardry, ask a thousand questions at once.”

This blog has reported on Synthetic Human Entities with Embryo-like Features (SHEEFs) before (see HERE and HERE for starters). The problem from a bioethical standpoint is this: is what we are experimenting upon human, and thus deserving of the protections on permissible research that we presently give to other human embryos? Answering that ethical question honestly and openly seems to be a necessary starting point.

Enter the obfuscation. Consider just the following three comments from some of the researchers in the article:

When the team published its findings in early August, they went mostly unnoticed. That is perhaps because the scientists carefully picked their words, straining to avoid comparisons to embryos. [One researcher] even took to using the term ‘asymmetric cyst’ to describe the [amniotic cavity-like structure] that had so surprised the team. “We have to be careful using the term synthetic human embryo, because some people are not happy about it,” says [University of Michigan professor and lab director Jianping] Fu.

"I think that they should design experiments to focus on specific questions, and not model everything," says Insoo Hyun, professor and ethicist at Case Western Reserve University. "My proposal is, just don't make the whole thing. One team can make the engine, another the wheels. The less ambiguous morally the thing is that you are making, the more likely you can do your research unimpeded."

“When Shao presented the group’s work this year, he added to his slides an ethics statement outlined in a bright yellow box, saying the embryoids ‘do not have human organismal form or potential.’”

This last comment seems to contradict the very emphasis of the linked article. As Regalado nicely points out: “The whole point of the structures is the surprising, self-directed, even organismal way they develop.”

Honestly, at this point, most researchers are struggling to understand whether or not the altered stem cells have human organismal form or potential. I suspect everyone thinks they must, or else researchers would not be so excited to continue this research. The value of the research increases the closer a SHEEF gets to being human. As our techniques improve, at what point does a SHEEF have the right to develop as any other normal embryo? Said differently, given their potential, and particularly as our techniques improve, is it right to create a SHEEF to be just the engine or the wheel?

Having scientists carefully picking their words and straining to avoid comparisons is not what scientists should ever be doing. Doing so obfuscates both science and ethics. Does anyone really think that is a good thing?

Mental Health ERISA Law for Dummies

My son is an ERISA attorney whose present work requires him to make sure that large group insurance plans offered by companies comply with various federal statutes, such as the regulations surrounding the PPACA (i.e., ObamaCare). In one of our recent discussions about healthcare access, he made me aware of some federal laws regarding the provision of mental health benefits of which I was heretofore completely ignorant. In my practice, I have frequently been frustrated trying to obtain mental health care for some of my patients, some of whom appeared to have reasonable health insurance that turned out to include rather minimal mental health coverage – a condition ERISA nerds refer to as a lack of parity between mental health benefits and covered medical and surgical benefits. This is thankfully changing. Without getting into the tedious minutiae of ERISA law (and it is very tedious), let me take you on an abbreviated tour of these mental healthcare federal statutes.

Prior to 1996, coverage for mental health care was unambiguously less generous than for physical illness. In 1996, the Mental Health Parity Act (MHPA) passed, requiring parity of annual and aggregate lifetime limits with med/surg benefits. The Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA) expanded parity to include treatment limitations, financial requirements such as co-pays, and in- and out-of-network coverage. However – and this was and continues to be a major "however" – neither of these federal statutes mandated any specific mental health coverage; they simply required insurers who chose to provide mental health coverage to do so with parity to other medical and surgical benefits. I like to think of it this way: just as Title IX established parity between the sexes, MHPA and MHPAEA tried to establish parity between mental health coverage and other medical coverage. If you want to get into the minutiae, begin HERE and HERE.

With passage of the PPACA in 2010, both MHPA and MHPAEA suddenly developed some teeth. The PPACA mandated coverage of certain mental health and substance abuse disorders, and the benefits for those covered services must now have parity with other medical and surgical benefits. For a deeper dive, see HERE. But oddly, the sharpness of MHPA's and MHPAEA's new teeth varies by state. For technical reasons that perhaps only an ERISA attorney understands, there remain state-by-state variations in interpreting the coverage minimums of the PPACA's 10 'required' Essential Health Benefits (EHBs); see HERE for more details (particularly the chart at the bottom of the linked article listing benefits by state).

Nonetheless, armed with these statutes, mental health advocates are demanding their parity. Recently, the 2nd US Circuit Court of Appeals allowed an ERISA lawsuit to proceed against a large health plan administrator over its alleged reduction of mental health benefits for services provided to patients. Whether this encourages further legal challenges for more parity, only time will tell.

What does the foregoing mean from a bioethics standpoint? This blog has frequently discussed the problems of healthcare access for some of our most vulnerable members of society, many of them related to mental health struggles. While I am no fan of the PPACA in general, this is one result that I applaud. More work needs to be done to determine exactly which mental health issues get covered, and perhaps who gets to decide. Until we all behave like the Good Samaritan toward all of our neighbors, it may take statutes like these to nudge us along the way.