A safety concern with gene editing

Hat-tip to Dr. Joe Kelley for bringing this to my attention…

As readers of this blog will recall, there is keen interest in exploiting recent discoveries in genetic engineering to “edit” disease-causing gene mutations and develop treatments for various diseases.  Initially, such treatments would likely use a patient’s own cells—removed from the body, edited to change the cells’ genes in a potentially therapeutic way, then returned to the patient’s bloodstream to find their way to the appropriate place and work to treat the disease.  How that would work could differ: make the cells do something they wouldn’t normally do, or make them do something better than they otherwise do (as in altering immune cells to treat cancer); or make them work normally so that the normal function would replace the patient’s diseased function (as in altering blood cells for people with sickle cell anemia so that the altered cells make normal hemoglobin to replace the person’s diseased hemoglobin).

Or maybe we could even edit out a gene that causes disease (sickle cell anemia, Huntington’s disease) or increases the risk of disease (e.g., BRCA and cancer) so that future generations wouldn’t inherit it.  Or maybe we could edit genes to enhance certain health-promoting or other desirable qualities.

The recent scientific enthusiasm for gene editing is fueled by the discovery of the relatively slick and easy-to-use (if you’re a scientist, anyway) CRISPR-Cas9 system, which is a sort of immune system for bacteria but can be used to edit/alter genes in a lot of different kinds of cells.

It turns out that cells’ normal system for repairing gene damage can and does thwart this editing, reducing the efficiency of the process.  The key component of this system is something called p53, a critical protein that, if abnormal, may not do its repair job so well.  When that happens, the risk of cancer increases, often dramatically.  In cancer research, abnormal p53 is high on the list of culprits to look out for.

Two groups of scientists, one from the drug company Novartis and one from the Karolinska Institute in Sweden, have published on this.  P53’s thwarting of gene editing is particularly active in pluripotent stem cells, which are some, but not the only, candidate cells to be edited to create treatments.  These cells are also constituent cells of human embryos.  If the CRISPR-Cas9 process is used on these cells, p53 usually kills them off—unless it is lacking or deficient, in which case the cells survive, but the altered cells could themselves become cancers later on.

This is something that has to be monitored carefully as gene-edited cells are developed into medicines, so to speak.  One does not want the patient to appear to be healed, only to develop a cancer, or a new cancer, later on.  One certainly would want to know the risk of that before editing an embryo—an unborn human, a future baby if placed in the right environment—to create a gene-edited human being.

Yet, as I’ve written here in the past, it appears that experimentation in heritable gene editing is pressing on.  I’ve argued, and continue to argue, that heritable human gene editing is a line that must not be crossed, that would place too much trust in the providence of the scientists/technologists who are the “actors” exerting power over fellow humans who become “subjects” in a deep sense of the term; that the risks to the subjects are undefinable; that it would enable perception of humans as “engineering projects”; that the gift of life would tend to be replaced by seeking to limit birth to “the people we want”; that the people acted upon are unable to provide consent or know what risks have been chosen for them by others, even before birth.  Rather than press ahead, we in the human race should exercise a “presumption to forbear.”

A counterargument is that, in limited cases where the genetic defect is limited and known, the disease is terrible, and treatment alternatives are few or none, the risks are worth it.  The recent papers seem to expose that line as a bit too facile.  How many embryos would be created (and destroyed) to develop the technique before “taking it live”?  Could we work things out in animals—monkeys, maybe?  How many generations would we have to alter, create, and follow to be sure that a late risk—such as cancer—does not emerge?  Or maybe our animal-rights sensibilities stop us from putting monkeys at such risk—maybe mice will do?

The new papers are dense science.  Frankly, I can grasp the topline story but have trouble digesting all the details.  More sophisticated readers will not be so impaired.  The news report, in the English of the general public, can be read here; the Novartis and Karolinska reports can be read (but not downloaded or printed) here and here, respectively.

More on genetic medicine

The third and final installment from The Code, a series of 3 short documentaries on the internet about the origins of genetic medicine, is entitled “Selling the Code.”  This is about genetic testing to try to predict risks of diseases, among other things.  Doctors use some of this testing in clinical care and in a burgeoning amount of research.  A number of companies, such as 23andMe, will, for a (not-too-high) price, sequence your genes, or at least some of them, from a cheek swab sample you send, and then give you a report of what the results are and what they might mean.  In cases where there is a simple connection between a genetic abnormality and a disease—if you have the gene, you get the disease—the approach can be very helpful.  But it’s rarely simple.  Even for known cancer-propensity genes like BRCA1 and BRCA2, there are many variants, and what they mean clinically is far from fully known.  In fact, for most of the common diseases we care about, like heart disease, diabetes, and most cancers, the story is complicated indeed.  So what to do with the information is often far from obvious, and careful genetic counseling by a physician who specializes in genetic medicine is a must.

23andMe ran afoul of the FDA a couple of years ago, leading to a long process that resulted in FDA acceptance of a more limited menu of testing by the company.

And some companies will sell you “genetic information” for more trivial concerns—presuming to tell you something meaningful about what fitness regimen you should pursue, or what wine you’ll like.  Caveat emptor, I suppose, although the risks are low for some of this.

AND—companies like 23andMe keep anonymized databases of the genetic information they get for and from their customers, and sell that information to drug companies to support the latter’s research.  An individual can’t be identified in the process (at least, not readily; see my January 2013 post about “DNA research and (non)anonymity”), but the data in the aggregate are valuable to the genetic sequencing company.

These kinds of concerns—particularly what to do with an individual’s information, but also the usefulness of having genetic data on a large group of people to understand disease and help discover new treatments—are germane to an ongoing project of the Hastings Center to assess the implications of testing the whole genomes of large numbers of babies, to screen for any of several dozen genetic diseases.  Again, most of the babies will be perfectly healthy, and the yield from screening for rare conditions is low.  But people arguably have a right to know about themselves, and parents to know about their newborns.  Yet still, to what end will we use information that we don’t fully understand?  Read a good Los Angeles Times article, which overlaps some of the points in The Code’s video and provides other useful information in quick-and-easy form, here.

Finally, I was gratified to read that a project to synthesize an entire human genome in the laboratory is being scaled back, at least for now.  Apparently, they can’t raise enough money.  I bet would-be investors aren’t convinced they could own the results and guarantee a return on their money.  I fretted about this in May of 2016 and again in July of the same year.  I encourage readers to click through and read those, as well as the concerns raised by Drew Endy of Stanford and Laurie Zoloth of Northwestern, who criticized both the effort in concept and the closed-door, invitation-only meeting at Harvard to plan it.

That was two full years ago.  A lot is going on under our noses.

Belgian Euthanasia: Volunteers No Longer Necessary?

A recent resignation letter from one member of Belgium’s Euthanasia Commission suggests the slippery slope of who meets the criteria for legal euthanasia is becoming even more slippery. Dr. Ledo Vanopdenbosch sent his resignation letter to members of the Belgian Parliament who oversee the commission. His concern was with one of the main requirements of the law, which demands that the individual patient formally request euthanasia. Vanopdenbosch claims that a psychiatric patient was euthanized without his or her request. His resignation has generated substantial concern not only because Vanopdenbosch is a committee member but also because he is considered a strong advocate of euthanasia. Here is the AP article in Voice of America with the details.

One of the main tasks of the Belgian Euthanasia Commission is to review every euthanasia case to make sure each meets the legal criteria necessary for euthanasia. Any case in doubt is referred to the public prosecutor’s office. It is perhaps telling that in the 15 years since the legalization of euthanasia in Belgium, over 10,000 individuals have been euthanized, but only one case has been referred to prosecutors by the commission with the concern that it may have been performed illegally. Vanopdenbosch argues that the commission is acting in place of the courts, a potential conflict of interest given that those on the commission are generally considered strong supporters of euthanasia. In addition to the slippery slope metaphor used earlier, one might also add that the foxes are guarding the henhouse.

An internal review of this particular case resulted in the committee claiming that what really happened was an accidental death related to palliative care rather than actual involuntary or non-voluntary euthanasia, as is claimed by Vanopdenbosch. The general population will never know, as commission protocol and privacy concerns prevent the details of the case from ever reaching the light of day. In the absence of further details, one wonders whether the alleged palliative care for the unknown psychiatric condition was formally requested by an otherwise competent patient or just provided absent his or her formal consent but “in his or her best interest” by the patient’s physician or caregivers.

It is presently unknown whether or not Dr. Vanopdenbosch’s resignation will result in any changes in the structure, function or transparency of Belgium’s Euthanasia Commission. At the very least, one would expect to see an increase in referrals to the public prosecutor’s office for legal oversight. It is simply unbelievable that the committee has found only one case out of 10,000 sufficiently suspect to refer to prosecutors for legal review. Perhaps more importantly, I want to believe that even those supporting euthanasia would be against all forms of non-voluntary euthanasia, particularly involuntary euthanasia. Sadly, perhaps I am naive. In our post-modern world, how can any death be a “good death” unless, at the very least, the competent patient in question so stipulates?

(For an excellent recent YouTube interview containing a brief history of euthanasia, please see this link of an interview with Dr. Richard Weikart, Professor of History at California State University, Stanislaus. Some highlights: at 10:40, where he touches on Belgium and psychiatric euthanasia; at 19:00, where he discusses the slippery slope argument; and at 21:30, regarding non-voluntary euthanasia.)

Parkland & Bioethics

I have lived in South Florida for over 20 years now, and I do not remember anything grabbing and holding our community’s consciousness more than the February 14 shooting at Stoneman Douglas High School in Parkland, Florida (in Broward County).  In its aftermath, the more we hear about the events of that day, the more alarming it becomes.  This is the sort of tragedy that haunts children in profound ways.  I have had conversations with my two teenage daughters about the relative safety of their schools, and what would happen if the formerly unthinkable occurred.

It’s hard to keep track of all the news coverage.  Certainly, there are many on all sides of the gun issue who engage in sensationalism and scare tactics.  Sadly, the voice of the so-called “reasonable middle” often is silenced by the loud voices on the fringes.  I sincerely, but mistakenly, thought that after the horrific shooting at Newtown, Connecticut in 2012 (with the deaths of 20 first graders and 6 adults) leaders would take meaningful action.

Is gun violence a bioethics issue? A research letter published in JAMA in early 2017 says as much.  After citing several powerful statistics, the authors write: “Compared with other leading causes of death, gun violence was associated with less funding and fewer publications than predicted based on mortality rate.”  The debate is over the impact of the Dickey Amendment, passed in 1996, which states that “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention [CDC] may be used to advocate or promote gun control.”

Some Republicans in Congress say that the CDC is allowed to do research into gun violence even under the Dickey Amendment, but the evidence presented by Stark and Shah suggests that it is not being done.   Other Republicans have stated that the Dickey Amendment should be revisited. According to www.thehill.com, Rep. Bob Goodlatte from Virginia said that it would be appropriate for lawmakers to review the policy. He is quoted as saying, “I don’t think it’s inappropriate — particularly if the original author of that says it should be examined — to take a look at it . . . to see if there is a way to do that, to promote the cause, the core pursuit of the Centers for Disease Control, which is to prevent disease, not to address issues related to things that happen because someone has a disease like mental illness.”

Clearly, the subject of guns is controversial. Would CDC research into gun violence help affirm human dignity? Or, would the research be too politically biased to be of any value?   Might there be some valuable data gathered that could help address this most tragic of issues? This is a conversation worth having.

 

Reviewing the ethics of paying human research subjects

Sometimes it is both necessary and proper to pay a person to participate in a clinical trial of a drug or some other medical intervention, a data-collection study, or some other research that involves people.  An article in this week’s New England Journal of Medicine reviews many of the relevant ethical issues.

A link to the article is here.  Correction to initial post: a subscription or purchase does appear to be required.

Why pay somebody to be in a trial?  The main reasons are to reimburse them for unavoidable expenses, to compensate them for time that would not otherwise be required in the course of standard medical care or normal life, and, indeed, to get them to participate in the first place.  In cancer medicine, where I’ve worked, the subjects are cancer patients who are generally not paid to participate; they usually are willing to do so in the hope of possible benefit, plus, often, a sense of altruism.  But most drugs have their first human testing in healthy volunteers, to begin to identify potential safety concerns and understand how, and how rapidly, the drug is eliminated from the body.  In those cases, the research subjects are almost always paid, sometimes substantially.

Such payments are not necessarily unethical, as long as they are not too big.  If they are, then they could create an undue influence to participate.  That would upset the balance of benefits and risks and compromise true informed consent.  By well-accepted ethical standards for research on human subjects—many of which are codified in regulation—the risks to human subjects must not be excessive, must be avoided or mitigated to the extent reasonably possible and commensurate with the goals of the research, and must not exceed the foreseeable benefits of the research, either to the individual subject or to society overall (e.g., in the form of important medical knowledge), or both.

Payment to a subject is not considered a benefit in and of itself, but should be “neutral” to the benefit/risk assessment.

There’s no hard-and-fast rule about paying subjects—no single standard “fee schedule,” so to speak.  Rather, each ethics board reviewing a study must also review and approve the amount and timing of payments to subjects.  Again, such payments should be high enough to respect the subject’s contribution to the research, but not so high as to give subjects an incentive to participate when maybe they should not.  Also, it’s a general principle that payment should be in installments; generally, no more than 10-15% of the total should be held back to the very end of the study.  Why this last point?  Because it’s also a principle that subjects can opt out of a study at any time, and if they think “I have to stay in to the bitter end to get paid,” that could pressure them too much.

Note, BTW, that such pressure is not the same as coercion, which by definition involves a threat, and does not apply to this payment question.

Also, payments must be appropriate so that subjects don’t get a wrong idea about the potential value or efficacy of an experimental drug, and so that they are not induced to try to be in more than one study at once.  You might be surprised how significant that last risk is.  In my past IRB work, we used to worry about “professional subjects” who make some level of living by going from one research study to another.  Being in more than one study at once means getting two or more drugs at once that probably ought not to be combined, willy-nilly.

And of course, the potential for economic exploitation of low-income individuals must also be considered and respected.

The NEJM article really doesn’t break new ground but is a helpful review for those interested in essential research ethics.  The FDA has also provided guidance, which can be reviewed here.

Stem Cell Clinics & the FDA

When any business over-promises and under-delivers, it is well on its way to failure.   Does this principle also hold true in the world of stem cells?  In the last few months the promise of stem cell treatment has met the reality of government oversight.

Does the government have the responsibility to rein in the larger-than-life claims of stem cell treatment clinics? In a letter dated August 24, 2017 to US Stem Cell of Sunrise, Florida, the FDA cited at least 14 failures relating to the facility’s compliance with federal regulations. It is a powerful letter that makes one wonder what is happening in some of these clinics throughout the country. US Stem Cell responded quickly, re-asserting their claim that they were simply treating consenting patients with their own cells and not subject to the same sorts of regulations that drug manufacturers are.

Is there a place for government oversight over stem cell clinics? At the very least, it could easily be argued that some of their claims are over-the-top and should be subject to false advertising laws.  Michael Joyce makes this point clearly.  He cites the concerns of stem cell researchers Paul Knoepfler and Jeanne Loring. Dr Loring puts it bluntly: “[Stem cell clinics] don’t want to talk to real scientists . . . Because 99 percent of them know they’re pulling the wool over people’s eyes. This is marketing, not science.”

Joyce makes an important point. There are real people with real physical problems who are turning to stem cell clinics as a last resort. If one buys a faulty product from the mall, one has the opportunity to return it for a refund. However, if one receives a faulty medical procedure, how can one be repaid for the loss? In these cases, shouldn’t stem cell clinics be held accountable for misleading the public?

The New York Times describes what happened to some unfortunate individuals who suffered at the hands of US Stem Cell: “The women had macular degeneration, an eye disease that causes vision loss, and they paid $5,000 each to receive stem-cell injections in 2015 . . . Staff members there used liposuction to suck fat out of the women’s bellies, and then extracted stem cells from the fat to inject into the women’s eyes.” They “suffered severe, permanent eye damage…”

Desperate people will try desperate things in order to receive desirable results. It is my opinion that the FDA is acting properly by providing at least a level of protection from those who would exploit the desperation of suffering people.

Selective data collection – what do we know about the risks of IVF?

A recent article in Newsweek reports on a physician, Dr. Jennifer Snyder, who is calling for the formation of a registry of egg donors to help determine the risks to women who “donate” eggs, for monetary compensation, to other women undergoing IVF. Her motivation in calling for this registry was the death of her daughter at age 31 from cancer after she had donated eggs on three occasions. She points out that egg donors are commonly told that there are no known long-term risks of egg donation, but that the reason there are no known long-term risks is that the risks of egg donation have never been studied.

The article reports that Alan Penzias, chair of the practice committee at the American Society of Reproductive Medicine, agrees that such a registry is needed, and states that “national reporting on IVF, including data on both mothers and babies, is required by law.” It is good that this representative of those who practice reproductive medicine is in favor of a registry to assess the risks of egg donation, but there is a problem with his statement about the current reporting that is done on IVF in the United States.

That reporting on IVF is done under the authority of the CDC by the National Assisted Reproductive Technology Surveillance System. According to their National ART Surveillance website, what they measure to comply with the Fertility Clinic Success Rate and Certification Act is data about patient demographics, patient medical history, parental infertility diagnosis, clinical parameters of the ART procedure, and information regarding resultant pregnancies and births. The outcomes data are limited to information about the percentage of IVF cycles that achieve pregnancy and achieve live birth, and information on how many of those pregnancies are single or multiple gestations and how many are delivered prematurely or at term. No data are collected on the complications or ill effects that women who undergo IVF may experience, and no data are collected on birth defects or any other adverse consequences, beyond prematurity, birth weight, and plurality, for the infants born by way of IVF.

Dr. Snyder’s call for a registry for data on adverse effects experienced by women who donate eggs is absolutely necessary to be able to give women the information needed to be able to make an informed decision about being an egg donor. There is an urgent need for the same type of registry of adverse outcomes for women who undergo IVF and the children produced by IVF. It is inexcusable to expect women to consent to these procedures without knowing the risks because those who perform the procedures have failed to collect data about those risks.

Is Obfuscation Ever Helpful in Science or Ethics?

Obfuscation and science would seem to be polar opposites. The scientific method hinges upon correctly identifying what one starts with, making a single known alteration in that starting point, and then accurately determining what one ends up with. Scientific knowledge results from this process. Accidental obfuscation in that three-step process necessarily limits the knowledge that could potentially be gleaned from the method. Peer review normally identifies and corrects any obfuscation. That is its job. Such peer review can be ruthless in the case of intentional obfuscation. It should be. There is never any place for intentionally misrepresenting the starting point, the methods or the results.

Until now?

In an excellent article in Technology Review, Antonio Regalado describes the current status of research where human embryonic stem cells “can be coaxed to self-assemble into structures resembling human embryos.” The gist of the article is that the scientists involved are excited and amazed by the stem cells’ ability to self-organize into structures that closely resemble many features of the human embryo. Perhaps more importantly, per Regalado:

“…research on real human embryos is dogged by abortion politics, restricted by funding laws, and limited to supplies from IVF clinics. Now, by growing embryoids instead, scientists see a way around such limits. They are already unleashing the full suite of modern laboratory tools—gene editing, optogenetics, high-speed microscopes—in ways that let them repeat an experiment hundreds of times or, with genetic wizardry, ask a thousand questions at once.”

This blog has reported on Synthetic Human Entities with Embryo-like Features (SHEEFs) before (see HERE and HERE for starters). The problem from a bioethical standpoint is this: is what we are experimenting upon human, and thus deserving protections as to the type of research permitted that we presently give to other human embryos? Answering that ethical question honestly and openly seems to be a necessary starting point.

Enter the obfuscation. Consider just the following three comments from some of the researchers in the article:

When the team published its findings in early August, they went mostly unnoticed. That is perhaps because the scientists carefully picked their words, straining to avoid comparisons to embryos. [One researcher] even took to using the term ‘asymmetric cyst’ to describe the [amniotic cavity-like structure] that had so surprised the team. “We have to be careful using the term synthetic human embryo, because some people are not happy about it,” says [University of Michigan professor and lab director Jianping] Fu.

“I think that they should design experiments to focus on specific questions, and not model everything,” says Insoo Hyun, professor and ethicist at Case Western Reserve University. “My proposal is, just don’t make the whole thing. One team can make the engine, another the wheels. The less ambiguous morally the thing is that you are making, the more likely you can do your research unimpeded.”

“When Shao presented the group’s work this year, he added to his slides an ethics statement outlined in a bright yellow box, saying the embryoids ‘do not have human organismal form or potential.’”

This last comment seems to contradict the very emphasis of the linked article. As Regalado nicely points out: “The whole point of the structures is the surprising, self-directed, even organismal way they develop.”

Honestly, at this point, most are struggling to understand whether or not the altered stem cells have human organismal form or potential. I suspect everyone thinks they must or else researchers would not be so excited to continue this research. The value of the research increases the closer a SHEEF gets to being human. If our techniques improve, at what point does a SHEEF have the right to develop as any other normal embryo? Said differently, given their potential, and particularly as our techniques improve, is it right to create a SHEEF to be just the engine or the wheel?

Having scientists carefully picking their words and straining to avoid comparisons is not what scientists should ever be doing. Doing so obfuscates both science and ethics. Does anyone really think that is a good thing?

Fetal tissue and commerce

You may have seen in the general press that Indiana University is asking a federal judge to declare unconstitutional that state’s law banning research on the remains of aborted fetuses.  I noticed an article in the Wall Street Journal (subscription required).  An open-access account can be found here.

I oppose abortion, but I can imagine, for the sake of argument, that if one allows for abortion, it might be claimed that the tissue of an aborted unborn human could ethically be donated for research.  It seems to me that such an argument would construe this donation to be similar to donation of organs for transplantation.  In this case, the mother would be speaking for her (newly-deceased) unborn to make the decision, since the aborted one would not have decision-making capacity.

For such an action to be remotely ethical, donation of tissue could not in any way influence the decision to have an abortion–as, indeed, federal restrictions on fetal tissue research currently require.  There should be no profit to the donor or the abortion provider in the process.  In light of the Planned Parenthood brouhaha over this subject, I might suggest that the researchers seeking the tissue for research be required to bear any costs for the preparation of the tissue.  And something like the dead donor rule for organ transplantation would have to apply.  But that’s probably a trivial point in this case.  Never mind that the dead donor rule itself is under attack these days.

I imagine it’s clear that I don’t find this argument very persuasive.  For one, in organ donation, assuming the dead donor rule applies, one is not killing the donor on purpose, as is the case in abortion.  (Then again, maybe I speak too soon.  Maybe as euthanasia advances we will see it practiced explicitly to facilitate organ harvesting.  But I don’t want to believe that will get much traction.)

For another, scientists should seek alternate approaches to their research.  If we afforded unborn humans the same protections generally afforded to human research subjects, seeking such alternatives would be inescapable.  But in a time when it is far from agreed that we should not create human embryos solely for the purpose of medical research, extending protections to cover the new being in a pregnancy would appear a stretch for us.

Whatever the law allows, it is hard to square respect for human life with performing research on electively aborted babies, no matter how “important” the research appears.

The WSJ report says that other parts of Indiana’s law have been blocked in court, including a ban on abortions because prenatal diagnosis has detected Down syndrome–part of my subject last week.

Two other points about this case are painfully telling about how our society reflexively thinks of human life.  First: Indiana University’s key argument against the Indiana law is that it blocks commerce and thus violates the Commerce Clause of the Constitution.  The argument is that aborted fetal tissue is an “article of commerce,” similar to—and these are the precedents being cited—margarine, or meat slaughtered more than 100 miles from the point of sale.

Second:  the university contends that the law does not advance a legitimate public interest.  All it does is express “moral disdain for abortion.”  So: the protection of unborn human life is not a legitimate public interest.  What other human life lies outside the public interest, I wonder?

Hmmm….

 

Is Involuntary Temporary Reversible Sterilization Always Wrong?

Ever since Janie Valentine’s blog post last week I have been thinking about the problem of repeat drug offenders and their children. My home state is also Tennessee so I read Judge Sam Benningfield’s order (to reduce prison sentences by 30 days for any drug offender willing to “consent” to voluntary temporary sterilization) with particular local and regional interest.

My office practice is on a street with more than one Suboxone treatment clinic (Suboxone is a synthetic opioid designed to assist in narcotic withdrawal or to serve as a substitute in pain management, with less potential for abuse). It is not uncommon for me to see the parking lots of these clinics full of cars, with unsupervised children playing with other unsupervised children while their parents are inside receiving treatment. No doubt some of these patients are opioid dependent but not necessarily opioid impaired. My point is simply to convey the sheer volume of the opioid problem and to highlight that these are the families that are doing well: the children are still with their parents, and the parents are not (obviously) under the jurisdiction of the court system.

One partner in my practice and his wife are foster parents and have opened their home to children of repeat drug offenders. These children have often been temporarily removed from their homes by order of child protective services because of their parents' incarceration for a drug offense, or because of court-ordered treatment. The usual placement is a group of two or three siblings, often including a newborn in the throes of opioid withdrawal. After seeing several iterations of this pattern, I can certainly sympathize with the judge's moral outrage and frustration at seeing multiple children, often within the same family, born in opioid withdrawal, though I must agree with Janie Valentine and Steve Phillips that, in the case of the judge's court order (now rescinded), such consent is at best coerced, given the incarceration.

This brings me to the point of today's blog. Can there be any condition in which it is right to prevent repeat opioid drug offenders from conceiving a child while impaired by opioid addiction? No one will claim that conceiving a child while addicted to opioids results in a desirable outcome for the parent or the child. Choosing to avoid conception requires the very planning that opioid addiction frequently impairs. The current epidemic of opioid-addicted newborns suggests that expecting voluntary conception avoidance by the opioid impaired is a non-starter. Voluntary reversible forms of sterilization are available (though none is 100% successful at preventing conception), but they carry non-trivial barriers of access, cost, side effects, compliance, and efficacy. Reducing those barriers for people willing to choose temporary sterilization seems reasonable. But what about individuals not willing to voluntarily avoid conception while opioid impaired? Does society have any right to temporarily (reversibly) prevent conception for some time frame in someone impaired by opioids? Should this happen after the first birth of an opioid-addicted newborn? Can it happen after the fourth such serial opioid-addicted newborn birth? At what point should the autonomy of the opioid impaired yield to the duty of non-maleficence toward the child?

Let me additionally be clear about what I am not asking or claiming. I am not making a eugenics claim that opioid impairment is genetically determined, such that preventing the offspring of individuals suffering from opioid impairment would somehow reduce the future risk of opioid dependency within the larger population. I am also not claiming that individuals who are currently opioid impaired will always be opioid impaired. I am not claiming that opioid-impaired individuals are necessarily permanently bad parents; when not opioid addicted, they may in fact be wonderful parents. Finally, I am not asking that the sterilization be permanent, because I do not think that opioid impairment is permanent.

Again: Can there be any condition that makes it permissible to involuntarily temporarily reversibly sterilize repeat opioid drug offenders to avoid conceiving a child while opioid impaired?