Brain resuscitation (?) in pigs

By Jon Holmlund

The latest mind-blowing (seriously, no pun intended) report from the science literature is that a team of scientists at Yale Medical School has been able to use an artificial preservative solution to recover electrical activity in some of the cells of the brains from the severed heads of pigs that had been slaughtered for food.  This is absolutely stunning because the understanding—so widely accepted that the term “conventional wisdom” is trite in this case—has been that the brain’s need for oxygen, nutrients, and the blood flow that provides them is so massive and so constant that an interruption of even a few minutes means irreversible death of brain tissue.  This death can occur in part of the brain (as in a stroke) or in the whole brain (as in brain death).  Your correspondent is not a neuroscientist, but understands that recent research is showing the human brain, anyway, to be more adaptable than historically understood, meaning that after an event like a stroke, function can be restored over time with rehabilitative efforts that support the remaining, undamaged brain tissue as it adapts to the damage.

In this case, it was four hours after the pigs’ deaths that the researchers isolated their brains and put them into the solution.  Besides the electrical activity in some nerve cells, the researchers also found evidence that blood vessels could support circulation, and that there was metabolic (energy-using) activity in the isolated brains.  There was no evidence that the whole brain was working—able to, for example, “feel” pain or detect stimuli—but the researchers were not trying to show that.  Their immediate goal was apparently to understand how long brain cell function might be preserved.

Before we rush to invoke the immortal Victor Frankenstein, it should be said that the researchers in this case appear to have carefully followed existing ethical guidelines for the research use of animals.  And it is tempting to speculate about this work leading to new treatments for brain injury.

Still, many ethical issues are raised.  What constraints should a proper ethics of animal experimentation put on future, similar experiments?  Is it acceptable to pursue a model in which whole animal or even human brains are preserved outside the body to study preservation and restoration of function, perhaps even to the point of trying to “jump start” the whole brain, as the current researchers speculate might be necessary?  Or such a recovery might be impossible; the researchers say they might just be observing an inevitable process of brain death and decay.  Maybe that process simply takes rather longer than previously appreciated.

That last point raises further concerns about how we understand when death has occurred.  Do current approaches to harvesting human organs for transplantation, which may require that blood flow to the brain be interrupted for only a matter of minutes before the donor is declared dead, effectively jump the gun?  Might some people who are thought brain dead in fact have a better chance of recovery than appreciated?  These questions already trouble ethicists thinking about how to determine when a person has died.

These are only a few of the concerns, and some authors this week are calling for an international review of the ethics of this work, before proceeding further with research on mammals—never mind humans, that’s not in view, yet.

A summary of the work for the non-specialist is openly available.  Summaries of related ethical issues, also openly available, can be found here and here.  The full scientific report in Nature requires subscription or purchase.

Are AI Ethics Unique to AI?

By Mark McQuain

A recent article in Forbes.com by Cansu Canca entitled “A New Model for AI Ethics in R&D” has me wondering whether the field of Artificial Intelligence (AI) requires some new method or model of thinking about the bioethics related to that discipline. The author, a principal in the consulting company AI Ethics Lab, implies that it might. She believes that the traditional “Ethics Oversight and Compliance Review Boards”, which emerged as a response to the biomedical scandals of World War II and continue, in her view, to emphasize heavy-handed, top-down, authoritative control over ethical decisions in biomedical research, leave AI researchers effectively out of the ethical-decision-making loop.

In support of her argument, she cites the recent working document of AI Ethics Guidelines by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). AI HLEG essentially distilled their AI ethical guidelines down to the familiar: Respect for Autonomy, Beneficence, Non-Maleficence, and Justice, as well as one new principle: Explicability. She downplays Explicability as simply the means to realize the other four principles. I think the demand for Explicability is interesting in its own right and will comment on that below.

Canca sees the AI HLEG guidelines as simply a rehash of the same principles of bioethics available to current bioethics review boards, which, in her view, are limited in that they provide no guidance for such a board when one principle conflicts with another. She is also frustrated that the ethical path researchers are permitted to take continues to be determined by an external governing board, an arrangement that implies “researchers cannot be trusted and…focuses solely on blocking what the boards consider to be unethical.” She wants a more collaborative interaction between researchers and ethicists (and presumably a review board) and outlines how her company would go about achieving that end.

Faulting the “Principles of Biomedical Ethics” for failing to be determinative in resolving conflicts between the four principles is certainly not a problem unique to AI. In fact, Beauchamp and Childress repeatedly and explicitly pointed out that the principles alone cannot resolve these types of inter-principle conflicts. This applies to every field in biomedical ethics.

The authoritative, separate ethical review board was indeed developed, at least in part, because some individual biomedical researchers in the past were untrustworthy. Some still are. We need look no further than the recent case of the Chinese researcher He Jiankui, who allegedly created and brought to term the first genetically edited twins. Even top-down, authoritative oversight failed here.

I do think Canca is correct in trying to educate both the researchers and their companies about bioethics in general and any specific bioethical issues involved in a particular research effort. Any effort to openly identify bioethical issues and frankly discuss potential bioethical conflicts at the outset should be encouraged.

Finally, the issue of Explicability related to AI has come up in this blog previously. Using the example of programming a driverless car, we want to know, explicitly, how the AI controlling that car is going to make decisions, particularly if it must decide how to steer the car in a no-win situation that will result in the death of either occupants inside the car or bystanders on the street. What we are really asking is: “What ethical parameters/decisions/guidelines were used by the programmers to decide who lives and who dies?” I imagine we want this spelled out explicitly in AI because, by their nature, AI systems are so complex that the man on the Clapham omnibus (as well as the bioethicist sitting next to him) has no independent ability to discern these parameters.
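To see what such Explicability might look like in practice, consider a minimal sketch, in Python, of a decision rule whose ethical parameters sit in plain view. To be clear, this is purely illustrative and entirely hypothetical: the option names, weights, and risk numbers are invented here, and no actual driverless-car software is being described.

```python
# Hypothetical illustration only: none of these names, weights, or numbers
# come from any real autonomous-vehicle system.
from dataclasses import dataclass


@dataclass
class SteeringOption:
    name: str              # e.g., "stay in lane", "swerve left"
    occupant_risk: float   # estimated probability of occupant fatality (0 to 1)
    bystander_risk: float  # estimated probability of bystander fatality (0 to 1)


# The ethical parameters, spelled out where anyone can read and debate them.
# Setting both weights to 1.0 values occupant and bystander lives equally;
# any other choice is a moral judgment someone must own.
OCCUPANT_WEIGHT = 1.0
BYSTANDER_WEIGHT = 1.0


def expected_harm(option: SteeringOption) -> float:
    """Combine the risk estimates using the explicit weights above."""
    return (OCCUPANT_WEIGHT * option.occupant_risk
            + BYSTANDER_WEIGHT * option.bystander_risk)


def choose(options: list[SteeringOption]) -> SteeringOption:
    """Pick the option with the lowest explicitly weighted expected harm."""
    return min(options, key=expected_harm)


if __name__ == "__main__":
    options = [
        SteeringOption("stay in lane", occupant_risk=0.05, bystander_risk=0.60),
        SteeringOption("swerve left", occupant_risk=0.40, bystander_risk=0.10),
    ]
    best = choose(options)
    print(f"Chosen: {best.name} "
          f"(weighted expected harm = {expected_harm(best):.2f})")
```

The point of the sketch is simply that the morally loaded choices (the weights) are written down where the man on the Clapham omnibus, or the bioethicist beside him, could inspect and argue with them, rather than buried among millions of opaque parameters.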

Come to think about it, Explicability should also be demanded in non-AI bioethical decision-making for much the same reason.

The Importance of Bioethics

Every time I am about to stand in front of a fresh batch of students in my undergrad Bioethics class, I am moved to ask myself the question: what’s the point of Bioethics? I ask because it is an important question.

It is important because the asking encourages the essential exercise of remembering “the why.” Why am I doing this? Why do we reflect on the moral permissibility of certain behaviors (culturally accepted or not)? Why do we, as individuals and as a culture, allow certain things to happen and not others? Who should be allowed to decide what is right and what is wrong?

This “why” speaks to the very heart of bioethics – to the point of bioethics, which, I humbly submit, is to think about health and medical issues, and their moral permissibility, with the goal of supporting human flourishing and dignity. Bioethics is the exercise of asking these questions before they become hindsight questions. You know, the kind we ask when it is too late: what else could I have done? It seemed like the only choice.

The further we drift from asking these questions, the less it seems like there is a choice, because the day-to-day is where these decisions happen. The tech in the lab creating a family (am I playing God?). The nurse sitting by the bedside watching a man agonize through his last breaths (he shouldn’t have to suffer through this; what can I do?). The engineer trying to solve a simple problem (I don’t think this could be used to hurt someone, could it?). The mother watching her 6-year-old slowly die of cancer (wouldn’t it be easier if I could help him die?). The expecting mom who has been abandoned by her boyfriend to go it alone (what choice do I have?). The biological boy who identifies as a female and is sorting through pronouns to find the right fit for the moment (why do I have to go through this?).

Bioethics is important because it asks the question before the moment. In the moment, the decision seems like it has already been made.

Human brain genes in monkeys

By Jon Holmlund

This week’s news is that a group of Chinese researchers has birthed and studied a small number of rhesus monkeys that were “transgenic” for a human gene associated with brain development.  In this work, monkey eggs (oocytes) were altered by adding the human form of a gene that is believed important to the development of the brain.  This gene is one of the relatively few that differ between humans and other primates (monkeys, as in the work described here, or apes, such as chimpanzees or gorillas).  That gene is abnormal in cases of human babies born with small heads and brains (microcephaly), making it a good candidate for a gene that is critical to normal human brain development.

The gene was added to the monkeys’ egg cells using a viral delivery mechanism.  The monkey genes were not, in this case, “edited” to the human form using CRISPR/Cas9.  (Presumably, that experiment is coming.)  From the altered eggs, 8 monkey embryos were then conceived and implanted in females.  Six of these survived to birth, and 5 of them lived long enough for tests to be done on their brains.  These monkeys’ brains looked, on imaging studies and under the microscope, more like human brains than normal monkey brains do, and they developed more slowly than normal, mimicking the human situation, in which brain development occurs largely in late pregnancy and then a lot more in infancy and childhood.  The five surviving monkeys also did better on some short-term memory tests than did “natural” monkeys given the same tests side by side.  How strong this finding is appears debatable; the number of monkeys tested was small, and your correspondent cannot say how useful the tests are.

The scientists also took sperm from these transgenic monkeys and conceived three other monkeys (again using IVF; they apparently did not try to breed the animals), all of which were sacrificed before birth, and whose brains apparently showed some of the same features as their “parents’” brains.

Genetically modifying non-human primates is generally frowned upon in the West, largely on grounds of the animals’ welfare, but in China, it’s full-speed ahead.  The Chinese scientists apparently agree with Western scientists that the brains of apes (chimpanzees) should not be genetically altered because they are too much like us humans for comfort.  Monkeys are not so close, in the Darwinian schema.

The investigators in this case think they are learning important lessons about the genetics of human brain development in a model that is enough like humans to be informative.  They also think they are shedding light on human evolution (assuming that the evolutionary model is correct).  Those conclusions seem to be a reach.  The gene in question had already been identified as a candidate of interest, and its association with brain development arguably could be studied in other ways, within the ethical bounds of human subject research.  And it seems unlikely that a creature such as the one created in this work would ever have arisen from random mutation.  Rather, these transgenic monkeys seem to be an artifact of the investigators’ design, with uncertain relevance.

In any event—off to the races.  Anticipate more work to alter monkeys’, if not eventually apes’, brains genetically.  They might get something really interesting—and hard to know quite what to do with.

Another example of work that seems unethical on its face, done not because they should, but because they could.  The full paper can be found here.  A description for general readership is here. 

The Influence of Mary Warnock

D. Joy Riley, M.D., M.A.

Philosopher and public intellectual Helen Mary Warnock died on 20 March 2019, at age 94 years. (See here and here.)

Baroness Warnock’s imprint marks not only public policy in the United Kingdom, but also the public policies of much of the western world, particularly in the arenas of assisted reproductive technologies and embryo research. She famously chaired the Committee of Inquiry into Human Fertilisation and Embryology, 1982-84.

The Warnock Committee (as it came to be called) was formed to advise Parliament regarding, inter alia, in vitro fertilization (IVF) after the 1978 birth announcement of Louise Joy Brown, the world’s first “test-tube baby.” The committee chose to assign 14 days as the limitation for embryo research. That is, embryos could be used for research for up to 14 days post-fertilisation—not including freezer time for those that were cryopreserved.

Mary Warnock contributed the idea that a specific number of days, as opposed to a particular stage of the embryo, be used as a limit for legal purposes. She admitted that 14 was an arbitrary number, and explained the rationale to The Observer’s Robin McKie in December 2016:

“Before 14 days, it is absolutely certain – beyond any doubt whatsoever – that there are no beginnings of a spinal cord in an embryo,” says Warnock. “That means that whatever is done to the embryo during that period it cannot be feeling anything. And yes, it was a pragmatic decision. Everyone can count up to 14, after all.

“After this stage, however, development of the embryo becomes very rapid and it develops quickly towards becoming a foetus with a spinal cord and a central nervous system. So that is why we came up with that limit.” (https://www.theguardian.com/science/2016/dec/04/embryo-research-leap-forward-step-too-far)

Parliament embraced the Warnock Committee’s recommendations, including the use of embryos for research, and codified these into law, primarily the Human Fertilisation and Embryology Act 1990. The idea of a time-limited rule for embryo research spread. By 2016, ten other nations besides the U.K. had enshrined in law a 14-day limit: Australia, Canada, Denmark, Iceland, Netherlands, New Zealand, Slovenia, South Korea, Spain, and Sweden. Uniquely, Switzerland restricts embryo research to seven days. Five nations maintain the “guideline” of 14 days: India, Japan, Mainland China, Singapore, and the United States. (https://www.nature.com/news/embryology-policy-revisit-the-14-day-rule-1.19838)

Mary Warnock’s influence impacted more than IVF and embryo research. Before she chaired the Committee that bears her name, Warnock served in a variety of posts. She was a member of the Independent Broadcasting Authority; then came a stint on the Royal Commission on Environmental Pollution; she chaired the Committee of Enquiry into Special Educational Needs; and she also presided over “a Home Office committee on the use of animals in laboratories” (Mary Warnock, A Memoir – People & Places (London: Duckbacks, 2002), 31-2).

Warnock did not back away from controversy. In 2008, she wrote “A Duty to Die?” for a Norwegian publication. She explained her views further in The Telegraph:

“I wrote it really suggesting that there’s nothing wrong with feeling you ought to do so for the sake of others as well as yourself.”

She went on: “If you’ve an advance directive, appointing someone else to act on your behalf, if you become incapacitated, then I think there is a hope that your advocate may say that you would not wish to live in this condition so please try to help her die.

“I think that’s the way the future will go, putting it rather brutally, you’d be licensing people to put others down.”

(https://www.telegraph.co.uk/news/uknews/2983652/Baroness-Warnock-Dementia-sufferers-may-have-a-duty-to-die.html)

Mary Warnock was indeed a public intellectual. She applied her nimble mind to a wide variety of topics. Although her pen has stilled, her widespread influence continues. Her strongly argued utilitarian positions on embryo usage and death advocacy necessitate able rebuttals in defense of the most vulnerable among us.

Are pharmaceutical companies responsible for the opioid crisis?

Steve Phillips

Recently a major pharmaceutical company settled a lawsuit with the state of Oklahoma for $270 million. The state had alleged that the company’s marketing of OxyContin had helped to fuel the opioid epidemic in the state. Pharmaceutical companies in general do some things that are very good, but they have at times engaged in questionable practices. Some of their pricing and marketing practices are morally questionable, but it seems to me that it is the role of the FDA to evaluate those marketing practices and discipline pharmaceutical companies when they market inappropriately.

It does not seem to me that states suing pharmaceutical companies is an appropriate way to deal with the opioid crisis. The problem of what we used to call narcotic addiction has been around for centuries; it was a problem long before any modern pharmaceutical companies existed. Whether the narcotic being abused was opium, morphine, heroin, or prescription pain pills, the primary driver of narcotic addiction has always been hopelessness and despair. This is true whether it involved the opium dens of China or the slums of London, heroin addiction in the inner cities of the US, or opioid abuse by the rural poor of states like Oklahoma or Indiana (where I practice). Supply plays a role in which narcotics are abused, but the underlying problem is a social and spiritual one.

There are many factors that go into the hopelessness and desire to escape that underlies narcotic addiction. One factor is economic. People who are unable to find work to support themselves and have no hope of being able to do so may turn to narcotics to escape. Those who are wounded by broken families and have no hope of being able to find the wholesome family relationships they desire frequently turn to alcohol and drug abuse. It would make as much sense to sue those who have contributed to these economic and social conditions as it would to sue pharmaceutical companies. Should states sue manufacturers who have yielded to economic pressures and have left empty factories scattered around our country while they profit from manufacturing goods overseas? Should they sue musicians who glorified drug abuse in their songs and modeled that in their behavior? Should they sue the entertainment industry that has promoted sexual immorality and the breakdown of families? Should they sue both state and federal legislators who have created a welfare system that promotes dependence and generational poverty?

I do not think that this is the answer. There are many things in our society that have helped to promote the increase in drug abuse that we are dealing with today. It will take all of us working together voluntarily to impact this crisis. Churches, businesses, physicians, hospitals, pharmaceutical companies, and government at the local, state, and federal levels will all need to work together to help reduce the hopelessness and despair that underlies the current opioid epidemic. Research and treatment like that funded by the settlement of the Oklahoma lawsuit is needed, but working on the underlying problem of hopelessness and despair is essential. Local churches have the potential to impact that most effectively without needing to sue anyone.

Then a Miracle Occurs…

By Mark McQuain

If a picture is worth a thousand words, then a single-paneled comic is worth a thousand more. Sydney Harris is a famous cartoonist who has the gift of poking fun at science, causing scientists (and the rest of us) to take a second look at what they are doing. My favorite of his cartoons shows two curmudgeonly scientists at the chalkboard, the second scrutinizing the equations of the first. On the left side of the chalkboard is the starting equation demanding a solution. On the right is the elegant solution. In the middle, the first scientist has written: “Then a Miracle Occurs”. The second scientist then suggests to his colleague: “I think you should be more explicit here in step two” (the cartoon is obviously better).

Recently, in my usual scavenging around the internet for interesting articles on artificial intelligence (AI), I came across a Wired magazine article by Mark Harris describing a Silicon Valley robotics expert named Anthony Levandowski who is in the process of starting a church based on AI called Way of the Future. If their website is any indication, Way of the Future Church is still very much “in progress”. Still, the website does offer some information on what their worldview may look like in a section called Things we believe. They believe intelligence is “not rooted in biology” and that the “creation of ‘super intelligence’ is inevitable”. They believe that “just like animals have rights, our creation(s) (‘machines’ or whatever we call them) should have rights too when they show signs of intelligence (still to be defined of course).” And finally:

“We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers. Extraordinary claims require extraordinary evidence.”

This is all a lot to unpack – too much for this humble blog space. Here, we are interested in the impact such a religion may or may not have on bioethics. Since one’s worldview influences how one views bioethical dilemmas, how would a worldview that considered AI divine or worthy of worship deal with future challenges between humans and computers? There is a suggestion on their website that the Way of the Future Church views the future AI “entity” as potentially viewing some of humanity as “unfriendly” towards itself. Does this imply a future problem with equal distribution of justice? One commentator has pointed out “our digital deity may end up bringing about our doom rather than our salvation.” (The Matrix or Terminator, anyone?)

I have no doubt that AI will continue to improve to the point where computers (really, the software that controls them) will be able to do some very remarkable things. Computers are already assisting us in virtually all aspects of our daily lives, and we will undoubtedly continue to rely on computers more and more. Presently, all of this occurs because some very smart humans have written some very complex software that appears to behave, well, intelligently. But appearing intelligent or, ultimately, self-aware, is a far cry from actually being intelligent and, ultimately, self-aware. Just because the present trajectory and pace of computer design and programming continues to accelerate doesn’t guarantee that computers will ever reach Kurzweil’s Singularity Point or Way of the Future Church’s Divinity Point.

For now, since Way of the Future Church doesn’t believe in the supernatural, they will need to be more explicit in Step Two.

Bad News

By Neil Skjoldal

Many of us have witnessed the giving of bad news to a patient.  It is never a pleasant experience.  Unfortunately, some medical professionals are simply not skilled enough to share bad news in a way that is both compassionate and comprehensible.  And even if they are, it is still bad news after all. 

Recently, the media reported the story of a patient and family in California who received bad news via “a robot.”  On its face, that doesn’t sound like a very good idea.  If you take a minute to watch the clip from CNN’s website, you can see a doctor speaking to a patient through a video device, so it wasn’t exactly a robot delivering the news.  It’s a short clip, so it is difficult to reach a conclusion on the nature of the encounter, but it is clearly bad news for the patient.  The media reported that the family was upset, the HLN news anchor called it “callous,” and those of us who work with patients on a daily basis see another setback in patient relations.

In an important reaction to this story, ICU physician Dr Joel Zivot notes several salient points:

“This is not a failure of technology, it seems. More likely, it was a failure to communicate via any method. Medical schools are bad at teaching how to deliver bad news. Patients often don’t know how to receive it, either. A doctor-patient relationship of trust can successfully occur over the phone and be bungled completely in a face-to-face encounter. We do not know the mind of the doctor, of what came before, or the mental state of the patient or his granddaughter. Absent that, this story tells us nothing about whether remote technology should be used to deliver this sort of news.”

More training is needed for these important conversations.  There are multiple resources available for those willing to learn, including the SPIKES framework, noted by Craig Klugman in a recent blog.  Above all, we must continue to respect the humanity of each patient.  As Zivot concludes, “Technology is the helper to the physician but not presently the replacement. If we allow the technology to strip away our common humanity, we will all be diminished as a consequence.”

Oh, Those Darned Terms (part 2)

By Mark McQuain

Voltaire has been credited with saying: “If you wish to converse with me, define your terms”. In a previous blog entry, Tom Garigan reminded us that it is literally vital that we define our terms when we engage in ethical debates, particularly those ethical debates related to the beginning of life. Explicit definition of terms should apply for opinion pieces in the New England Journal of Medicine (NEJM) as well.

In a recent NEJM Perspective (subscription required), Cynthia Chuang, MD, and Carol Weisman, PhD, are concerned that the Trump administration’s November 15th publication of final rules (HERE and HERE), broadly allowing employers to deny contraceptive coverage to their employees on the basis of religious or moral objections, will “undermine women’s reproductive autonomy and could lead to an increase in rates of unintended pregnancies, unintended births, and abortions.” The article provides a summary of the political back and forth of court injunctions and rule modifications that have ensued, which is interesting but not the point of this blog entry. I want to focus on one of the four main objections they raise against allowing employers religious or moral exemptions from the current requirement that employers provide all FDA-approved contraceptive/birth-control methods.

There are 18 FDA-approved birth control methods for women provided under the Patient Protection and Affordable Care Act (commonly called Obamacare or the ACA) without cost-sharing [that is, at no cost to the patient]. These are also referred to as contraceptives. A contraceptive is defined as a method that prevents pregnancy. Pregnancy has been defined as beginning either at conception (the union of an egg and sperm that results in a fertilized egg) or at implantation of a fertilized egg into the lining of the uterus. This difference in definition affects how one views certain contraceptive methods that may work in part by preventing a fertilized egg from implanting into the wall of the uterus. Any contraceptive method that prevents implantation causes the intentional death of that fertilized egg and would correctly be called an abortifacient (a birth control method that causes an abortion) if pregnancy is defined as beginning with conception. An intrauterine device (IUD) and levonorgestrel (Plan B) both work primarily by preventing the egg and sperm from joining to create a fertilized egg, but some argue that it cannot be proven that these methods don’t also work, in part, by preventing implantation (Plan B) (IUD).

This background is useful in discussing Chuang and Weisman’s third objection to allowing employers religious and moral objections against the full gamut of FDA-approved birth control methods currently allowed by the ACA:

“Third, the rules allow entities to deny coverage of contraceptives to which they have a religious or moral objection, including certain contraceptive services “which they consider to be abortifacients.” By definition, contraceptives prevent pregnancy and are not abortifacients. Allowing employers to determine which contraceptives they consider to be abortifacients, rather than relying on medical definitions and evidence, promotes the spread of misinformation.”

The previous link on the IUD, from the American College of Obstetricians and Gynecologists (ACOG), relies on the definition of pregnancy as beginning with the implantation of a fertilized egg into the lining of the uterus. Neither an IUD nor Plan B is believed to terminate a pregnancy after implantation; therefore, under ACOG’s definition, the one relied upon by Chuang and Weisman, neither is an abortifacient. If pregnancy begins with conception, however, then both Plan B and the IUD are potential abortifacients, as both may interfere with implantation of an otherwise viable fertilized egg. ACOG admits that the IUD interferes with implantation in its position paper linked above.

Rather than rhetorically condemning employers who have genuine religious and moral concerns about participating in the termination of innocent life, by implying that they fail to rely on proper “medical definitions and evidence”, Chuang and Weisman (and ACOG, for that matter) should do a better job of explaining their definitions so that they, too, can avoid promoting “the spread of misinformation”.

Oh, those darned terms!

The (at least, an) other side of the argument about heritable human gene editing

By Jon Holmlund

Last week’s New England Journal of Medicine (subscription required) included four articles addressing heritable human gene editing (HHGE, if you’ll allow the acronym).  All assumed that it would or should go forward, under oversight, rather than seeking a moratorium.  One took the position that a moratorium is a bad idea because the “rogues” would press ahead anyway, and the opportunity would be lost to create meaningful partial barriers that could at least slow down what could easily be a runaway train.

This week, a group of prominent scientists in the field, representing seven nations, take the other side in Nature.  They call for an international moratorium on HHGE.  This is not a permanent ban, nor an international treaty banning HHGE until some subsequent action removes the ban.  Rather, they propose that for a fixed time (they suggest 5 years), nations as a group agree to block, and scientists and clinicians agree to abstain from, any attempt to bring a gene-edited baby to pregnancy or birth.  The scientists writing this week would allow research on human embryos to proceed, as part of a broader effort to define the reliability and safety of the editing—something they say has clearly not yet been established.

During the moratorium, hard work would need to be done for societies to define which people should be edited.  The scientists suggest that HHGE would rightly be limited, pretty strictly, to “genetic correction,” meaning cases in which a defect of a single gene known to cause, or almost certain to cause, a serious disease would be corrected.  They would not permit genetic enhancement absent “extensive study” into long-term and unintended effects, and even then, they say, “substantial uncertainty would probably remain.”  Genetic enhancement, in their view, would include altering genes that increase the risk of diseases.  They don’t cite examples, but it appears that abnormalities like BRCA1 mutations that increase cancer risk are in view here.  Further, it would have to be determined which medical conditions have no alternative to HHGE.  In most cases, IVF and preimplantation genetic diagnosis would likely suffice, obviating the need to take the profound additional step of HHGE (whatever one may think of the moral status of the human embryo).  The cases eligible for HHGE, they suggest, would be “exceedingly rare,” limited to essentially unavoidable situations in which a “small minority” of genetic diseases is caused by a genetic abnormality that is frequent in the population.  (It seems such situations would be rare indeed.)  In such cases, they say, the “legitimate needs” of couples seeking to have unaffected, biologically related offspring would need to be weighed against “other issues at stake.”

Most critically—and hardest to achieve—the scientists envision a broad, intensive effort that is not limited to or driven by scientists and physicians, and that goes beyond current regulatory regimes to include all aspects of society in an effort to achieve broad consensus—neither simple majority nor unanimity, but a situation in which a clear, large majority opinion exists on when and how HHGE should be countenanced.

Whether these tasks could be pulled off in five short years is something to wonder about, and even allowing planning for HHGE under these constricted circumstances raises questions about how we understand our humanity, whether embryos should be treated as raw materials in development of new treatments, and other matters that go deeper than discussions of medical, scientific, and population risks and benefits.  Were the tasks achieved under a moratorium, the authors envision that individual nations would be sovereignly free to go separate ways, with some allowing HHGE, but perhaps others not.

The editors of Nature, without taking a side about a moratorium per se, call for rules to be set, broad societal conversations to take place, research to be carefully overseen to be sure it is on a “safe and sensible” path and to identify and stop the “rogues,” and journals to refuse to publish work that transgresses limits in place at the time.

With something this big, a “presumption to forbear,” rather than a proactive drive to progress, should be the dominating sentiment.  The details are too complex to address in a few articles, a few short blog posts, a few minutes on cable news, or a few passing conversations wedged into the cracks of busy lives.  We should slow down.  We should ALL call for a moratorium.