Technical steps to gene-edited babies

This blog has carried several comments about the prospect of heritable human gene editing.  While nearly no one currently supports bringing such babies to birth, and most condemn those who would rush ahead to do so, it appears that only a distinct minority think the human race should, if it could, agree never to do such a thing.  The most cautious perspective is to advocate a moratorium.  Others, in favor of proceeding, argue that, in essence, with the technological genie (my term, not necessarily theirs) out of the bottle, a moratorium, much less a ban, is futile; the “rogues” will press ahead, casting off restraint.

Advocates of research in this area have argued that a clear, careful, regulated pathway is needed to guide the work through the necessary laboratory experiments that should be done before making a woman pregnant with a gene-edited embryo, in an attempt to be sure that the process is safe and highly likely to yield the intended result.  Even a moratorium would be, by definition, temporary, leaving the question, “when will we know to remove the moratorium?” to be answered.

A feature article in Nature, accessible without a paid subscription, asks “When will the world be ready” for gene-edited babies.  It walks through scientists’ understanding of the technical issues.  It is longer than a blog post, so I can only list key points here.  It is worth reading by anyone interested, and it is written in sufficiently non-technical language to be accessible to the general, non-scientist public.

Key concerns are:

  • How would we be sure that genes that were NOT intended to be edited were, in fact, not edited?
  • How would we be sure that genes that WERE intended to be edited were edited correctly?

These two matters have been addressed to some degree, or could be, in animals; such work would be faster and easier than work in human egg cells or human embryos, but the results in animals may differ from what is found in the embryos.  (A further question is how many embryos, observed for how long, would need to be studied to support confidence.)

  • Even if the intended gene edit is made, is it clear that doing so is safe and does not induce other health risks? 

This blog recently reported the UK study that suggested that changes in the gene edited in the twin girls born in China last year might eventually reduce life span.  A criterion promulgated in 2017 by the National Academies of Sciences, Engineering, and Medicine was that the edited gene should be common in the population and carry no known risk (including, presumably, no increased risk) of disease.  Such knowledge is lacking for human populations, and what is believed known about the association of genes with risk of future disease has often been developed in Western populations, and may not apply to, for example, Africans.

  • At least some embryos would include a mixture of edited and non-edited cells.  It would not easily be possible, if possible at all, to tell how many of each were present, or how many would need to be for the editing to work without posing risks to the embryo’s development into a baby and beyond.  And whatever answers were obtained would require manipulating healthy embryos after in vitro fertilization.  The outcomes could not be predicted from first principles.
  • What should a clinical trial look like?  How many edited children would have to be born, and their health (and, most likely, the health of their progeny) observed for how long to get provisional answers before practicing the technique more widely?  Or, would the work proceed as IVF did—with dissemination in the general public, and no regulated research?

A US and UK committee is planned to address these questions, with the intent of proposing guidelines in 2020.  This will be important to follow, though most of us will have no chance to affect it; we will just be watching, which leads to the last concern:

  • Is the world ready?

If that means, is there an international, or even a national, consensus, then the answer is clearly “no.”  That almost certainly remains “no” if one asks whether there is a future prospect for consensus.  It’s hard to envision something other than different groups and nations holding different judgments, and, most likely, remaining in some degree of irresolvable conflict.

More gene-edited babies on the way

It is reported this week that a Russian scientist plans to edit the genes of more human embryos, intending to bring gene-edited babies to birth.  As with the case in China last year, the intent is to edit a gene called CCR5, which encodes a receptor that facilitates initiation of HIV infection.  The stated reason is to prevent transmission of infection from the mother (in the Chinese case, it was the father who was HIV-positive).  Maternal transmission of HIV is a real risk, but there are other ways to prevent it, with medications.  And, as recently reported on this blog, the risks of editing this gene are not understood, nor are the long-term risks of heritable genome editing.

The science press is saying that someone should put a stop, now, to bringing edited embryos to pregnancy and birth.  But it is unlikely that effective action can be taken.  The public will has not been engaged, necessary medical research controls are not in place, and no one can say just who would have the authority to take what sort of action.

So for the moment there is little else to say.  We will hear of more cases.  We will find out later how we will respond.  Clarity and consistency of that response seem unlikely.

Pragmatism and principle regarding human gene editing

You may have seen in the general press that the gene-edited twin girls born in China last year may have had their life expectancies shortened in the bargain.  The doctor who edited the babies’ genes specifically edited one gene, which is associated with susceptibility to HIV infection.  Their father is HIV positive, but that does not put the babies at any health risk.  Further, the gene editing potentially could have increased their future risk for other infections.  Now, a group in the United Kingdom has analyzed mortality data for about 400,000 people who volunteered to have their genetic information placed in a data bank.  They reported that people who have a gene mutation similar—but apparently not identical—to the change made in the Chinese babies had a 21% lower chance of living to age 76 than people without the mutation.  The average age of the people who volunteered their information for the data bank is said to be 56.5 years, so the implication is that there is a shortening of life expectancy after middle age, for people who have lived at least that long.
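A note on reading that statistic: the 21% figure is a relative reduction in the chance of living to age 76, so its absolute size depends on the baseline survival rate. A minimal arithmetic sketch, assuming a made-up 70% baseline that is purely illustrative and not a number from the study:

```python
# Illustrative arithmetic only: "21% lower chance of living to age 76" is a
# relative figure.  The 70% baseline below is an assumption for illustration,
# not a number taken from the UK data-bank study.
baseline = 0.70                    # assumed P(live to 76) without the mutation
relative_reduction = 0.21          # reported relative reduction
with_mutation = baseline * (1 - relative_reduction)

print(round(with_mutation, 3))             # probability with the mutation: 0.553
print(round(baseline - with_mutation, 3))  # absolute drop: 0.147 (about 15 points)
```

Under that assumed baseline, the relative 21% reduction works out to an absolute drop of roughly 15 percentage points; a different baseline would give a different absolute figure.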

One should interpret the U.K. analysis with caution, but the argument seems to be, “see, we don’t know the risks of human gene editing, so we shouldn’t be doing it.”  And indeed we do not know the risks.  But the argument in fact is, “…we shouldn’t be doing this—at least not yet.”  As Joy Riley pointed out on this blog a few days ago, the goal of a moratorium on human genome editing appears to be to allow the scientists working on the technology time to build public trust and consensus for it.  “We shouldn’t be doing this, ever” does not appear to be an option.  Long-term readers of this blog may recall numerous posts over the last few years describing this process of gradual acceptance in the scientific community.  The scientists draw an analogy to the 1975 Asilomar conference on recombinant DNA work, which established guard rails around that work.  But the analogy is flawed.  The risks of the work addressed at Asilomar were more readily defined, with shorter time frames to results, than the risks of genome editing.  400,000 middle-aged people’s mortality from any (unspecified) cause over the ensuing quarter-century?  How many edited people, studied for how long, over how many generations, with what consent process, to determine the risks?  There can be no acceptable definition of the risks prior to actually assuming them.  “The babies are the experiment.”

The correct conceptual framework for human genome editing is not benefit-risk analysis, it is deeper reflection on where we should not let engineering encroach on the human organism.  “Keep your ambition off our bodies,” I suppose.  And when we think in those terms, we should quickly recognize territory where we fear to tread at all, not just slow down.

Proposed moratorium on human germline editing: Asilomar analogue?

The Editorial Board of The Washington Post (WaPo) recently published their opinion on regulation of heritable genetic changes in human eggs, sperm, and embryos. The authors expressed some measure of relief that organizations such as the National Academies in the U.S., the Royal Society in Britain, and the World Health Organization are beginning to consider the daunting topic of human heritable genetic changes. The board advised, “The goal must be a framework that will enable genuine scientific advancement but avoid reckless fiddling with the source code of life.”

The WaPo editorial board further recommended “something of similar scope and power” to that of the Asilomar Conference on Recombinant DNA Molecules, held in February 1975. Asilomar, as that conference came to be called, was convened to evaluate the risks posed by the novel technology of genetically modifying organisms. The public perception of Asilomar has been primarily one of scientists shouldering the “social responsibility of science.”

Further, the WaPo article pointed out that one of the authors of the March 2019 Nature commentary calling for a “global, temporary moratorium on clinical uses of human germline editing” was Paul Berg, a Nobel laureate, and one of the organizers of the Asilomar conference. The Nature commentary, authored by Eric Lander, Françoise Baylis, Feng Zhang, Emmanuelle Charpentier, and Paul Berg, described the consensus for a moratorium thusly:

The 18 signatories of this call include scientists and ethicists who are citizens of 7 countries. Many of us have been involved in the gene-editing field by developing and applying the technology, organizing and speaking at international summits, serving on national advisory committees and studying the ethical issues raised.

This description appears to differ substantively from one Berg gave of the Asilomar analogue. In an 18 June 2011 video interview by Larry Goldstein, Berg had this to say about the success of Asilomar:

We made some decisions that were smart in retrospect. For example, one of the things we did not do – and did not include in any way in the agenda was the ethics. We didn’t talk about genetic testing… we talked about real experiments, and what the impact of those experiments would be in the field (10:40-10:58)

Of the five authors calling for a moratorium on human heritable genetic changes, only Françoise Baylis is an ethicist. A 2004 article penned by Baylis and Jason Scott Robert, “The Inevitability of Genetic Enhancement Technologies,” gives insight into her views. The paper concludes with

. . . we maintain that accepting the inevitability of genetic enhancement technologies is an important and necessary step forward in the ethical debate about the development and use of such technologies. We need to change the lens through which we perceive, and therefore approach, the prospect of enhancing humans genetically. In recognising the futility of trying to stop these technologies, we can usefully direct our energies to a systematic analysis of the appropriate scope of their use. The goal of such a project would be to influence how the technologies will be developed, and the individual, social, cultural, political, economic, ecological, and evolutionary ends the technologies should serve. It is to these tasks that bioethical attention must now fully turn.

It appears that 1) Paul Berg’s previous wariness about including “ethics” has not, to date, been a problem in this enterprise; and 2) the called-for moratorium is truly only a “speed bump” on the road to converting future generations into our own laboratory experiments. The “individual” ends such experiments will serve are likely to be those of the individuals who are paid handsomely to do such experiments or who hold the patents to the processes utilized – not those of the individuals formed. Despite the extensive embrace of heritable human genome editing by the principals cited here, we need to remember that this is not a road that we must travel. Future generations are not our playground. We need to rethink this “moratorium”:  It should be an outright ban.

Emerging attempts to control gene editing

Recently, it was reported that the panel convened by the World Health Organization (WHO) to develop standards and guidelines for gene editing will ask the WHO to establish a registry for any projects on heritable human gene editing.  The idea is that, to get research funding, a project would have to be registered, and there would be a required review in order to get on the registry in the first place.  The net effect would be to control the flow of money to such projects.

Also, according to Nature, the Chinese government is looking at amending its civil code, effective March 2020, to in essence make a gene editor liable for health outcomes by declaring that “experiments on genes in adults or embryos that endanger human health or violate ethical norms can accordingly be seen as a violation of a person’s fundamental rights.”  The idea here appears to be to make gene editors think twice about whether they are sure enough of their work to accept essentially a permanent risk of being sued for all they are worth if anything goes wrong in the future.  Your correspondent knows nothing about Chinese civil procedure, but in the litigious U.S., the risk of really big, unpredictable lawsuits at some entirely unpredictable time in the future, with no limit, can make even big companies shy to pursue something. 

So maybe these approaches, by “following the money,” as it were, would at least slow down heritable genome editing, short of a ban.  Skeptics of the utility or wisdom of a ban argue that the “rogues” will just find work-arounds anyway, and that entire states can “go rogue,” limiting the effects of the ban to only the nations willing to enact and enforce it.

That’s a reasonable argument, but it still seems that, by only requiring a registry—with noncompliance always a risk—or trying to up the ante in court—a risk that some entities might take if the perceived reward is big enough to warrant it, and they can hire enough expensive lawyers to limit the risk—there is an admission that heritable genome editing is going to go forward.  And, indeed, maybe there’s no stopping it.  But it still seems that promoting a stance toward human life that refuses to accept heritable gene editing is something we should do.

Brain resuscitation (?) in pigs

The latest mind-blowing (seriously, no pun intended) report from the science literature is that a team of scientists at Yale Medical School has been able to use an artificial preservative solution to recover electrical activity in some of the cells of the brains from the severed heads of pigs that had been slaughtered for food.  This is absolutely stunning because the understanding—so widely accepted that the term “conventional wisdom” is trite in this case—has been that the brain’s need for oxygen, nutrients, and the blood flow that provides them is so massive and so constant that an interruption of even a few minutes means irreversible death of brain tissue.  This can be in part of the brain (as in a stroke), or the whole brain (as in brain death).  Your correspondent is not a neuroscientist, but understands that recent research is showing the human brain, anyway, to be more adaptable than historically understood, meaning that after an event like a stroke, function can be restored over time with rehabilitative efforts that support the remaining, undamaged brain tissue adapting to the damage.

In this case, it was four hours after the pigs’ deaths that the researchers isolated their brains and put them into the solution.  Besides the electrical activity in some nerve cells, the researchers also found evidence that blood vessels could support circulation, and that there was metabolic (energy-using) activity in the isolated brains.  There was no evidence that the whole brain was working, able, for example, to “feel” pain or detect stimuli, but the researchers were not trying to show that.  Their immediate goal was apparently to understand how long brain cell function might be preserved.

Before we rush to invoke the immortal Victor Frankenstein, it should be said that the researchers in this case appear to have carefully followed existing ethical guidelines for the research use of animals.  And it is tempting to speculate about this work leading to new treatments for brain injury.

Still, many ethical issues are raised.  What constraints should proper ethics of experimentation on animals put on future, similar experiments?  Is it acceptable to pursue a model for whole animal or even human brains preserved outside the body to study preservation and restoration of function, perhaps even to the point of trying to “jump start” the whole brain, as the current researchers speculate might be necessary?  Or, such a recovery might be impossible; they say they might just be observing an inevitable process of brain death and decay.  Maybe it takes rather longer than previously appreciated.

That last point raises further concerns about how we understand when death has occurred.  Do current approaches toward harvesting human organs for transplantation, that may require that blood flow to the brain be interrupted for only a matter of minutes before declaring death of the donor, effectively jump the gun?  Might some people who are thought brain dead in fact have better chance of recovery than appreciated?  These questions already trouble ethicists thinking about how to determine when a person has died.

These are only a few of the concerns, and some authors this week are calling for an international review of the ethics of this work, before proceeding further with research on mammals—never mind humans, that’s not in view, yet.

A summary of the work for the non-specialist is openly available.  Summaries of related ethical issues, also openly available, can be found here and here.  The full scientific report in Nature requires subscription or purchase.

Are AI Ethics Unique to AI?

A recent article by Cansu Canca entitled “A New Model for AI Ethics in R&D” has me wondering whether the field of Artificial Intelligence (AI) requires some new method or model of thinking about the bioethics related to that discipline. The author, a principal in the consulting company AI Ethics Lab, implies that it might. She believes that the traditional “Ethics Oversight and Compliance Review Boards”, which emerged as a response to the biomedical scandals of World War II and continue in her view to emphasize a heavy-handed, top-down, authoritative control over ethical decisions in biomedical research, leave AI researchers effectively out of the ethical decision-making loop.

In support of her argument, she cites the recent working document of AI Ethics Guidelines by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). AI HLEG essentially distilled their AI ethical guidelines down to the familiar: Respect for Autonomy, Beneficence, Non-Maleficence, and Justice, as well as one new principle: Explicability. She downplays Explicability as simply the means to realize the other four principles. I think the demand for Explicability is interesting in its own right and will comment on that below.

Canca sees the AI HLEG guidelines as simply a rehash of the same principles of bioethics available to current bioethics review boards, which, in her view, are limited in that they provide no guidance for such a board when one principle conflicts with another. She is also frustrated that the ethical path researchers are permitted continues to be determined by an external governing board, implying that “researchers cannot be trusted and…focuses solely on blocking what the boards consider to be unethical.” She wants a more collaborative interaction between researchers and ethicists (and presumably a review board) and outlines how her company would go about achieving that end.

Faulting the “Principles of Biomedical Ethics” for failing to be determinant on how to resolve conflicts among the four principles is certainly not a problem unique to AI. In fact, Beauchamp and Childress have repeatedly and explicitly pointed out that the principles cannot independently resolve these types of inter-principle conflicts. This applies to every field in biomedical ethics.

The authoritative, separate ethical review board was indeed developed, at least in part, because at least some individual biomedical researchers in the past were untrustworthy. Some still are. We need look no further than the recent Chinese researcher He Jiankui, who allegedly created and brought to term the first genetically edited twins. Even top-down, authoritative oversight failed here.

I do think Canca is correct in trying to educate both the researchers and their companies about bioethics in general and any specific bioethical issues involved in a particular research effort. Any effort to openly identify bioethical issues and frankly discuss potential bioethical conflicts at the outset should be encouraged.

Finally, the issue of Explicability related to AI has come up in this blog previously. Using the example of programming a driverless car, we want to know, explicitly, how the AI controlling that car is going to make decisions, particularly if it must decide how to steer the car in a no-win situation that will result in the death of either occupants inside the car or bystanders on the street. What we are really asking is: “What ethical parameters/decisions/guidelines were used by the programmers to decide who lives and who dies?” I imagine we want this spelled-out explicitly in AI because, by their nature, AI systems are so complex that the man on the Clapham omnibus (as well as the bioethicist sitting next to him) has no ability to determine these insights independently.

Come to think about it, Explicability should also be demanded in non-AI bioethical decision-making for much the same reason.

Human brain genes in monkeys

By Jon Holmlund

This week’s news is that a group of Chinese researchers has birthed and studied a small number of rhesus monkeys that were “transgenic” for a human gene associated with brain development.  In this work, monkey eggs (oocytes) were altered by adding the human form of a gene that is believed important to the development of the brain.  This gene is one of the relatively few that differ between humans and other primates (monkeys, as in the work described here, or apes, such as chimpanzees or gorillas).  That gene is abnormal in cases of human babies born with small heads and brains, making it a good candidate for a gene that is critical to normal human brain development.

The gene was added to the monkeys’ egg cells using a viral delivery mechanism.  The monkey genes were not, in this case, “edited” to the human form using CRISPR/Cas9.  (Presumably, that experiment is coming.)  Using the altered eggs, 8 monkey embryos were then conceived and implanted in females.  Six of these survived to birth, and 5 of them lived long enough to do tests on their brains.  These monkeys’ brains looked, on imaging studies and under the microscope, more like human brains than normal monkey brains do, and these monkeys’ brains developed more slowly than normal, mimicking the human situation, in which brain development occurs largely in late pregnancy and then a lot more in infancy and childhood.  The five surviving monkeys also did better on some short-term memory tests than did “natural” monkeys given the same tests side-by-side.  How strong this finding is appears debatable; the number of monkeys tested was small, and your correspondent cannot say how useful the tests are.

The scientists also took sperm from these transgenic monkeys and conceived three other monkeys (again, using IVF, they apparently did not try to breed the animals), all of which were sacrificed before birth, and whose brains apparently showed some of the same features as their “parents'” brains.

Genetically modifying non-human primates is generally frowned upon in the West, largely on grounds of the animals’ welfare, but in China, it’s full-speed ahead.  The Chinese scientists apparently agree with Western scientists that the brains of apes (chimpanzees) should not be genetically altered because they are too much like us humans for comfort.  Monkeys are not so close, in the Darwinian schema.

The investigators in this case think they are learning important lessons about the genetics of human brain development in a model that is enough like humans to be informative.  They also think they are shedding light on human evolution (assuming that the evolutionary model is correct).  Those conclusions seem to be a reach.  The gene in question had already been identified as a candidate of interest, and its association with brain development arguably could be studied in other ways, within the ethical bounds of human subject research.  And it seems unlikely that a creature such as the one created in this work would ever have arisen from random mutation.  Rather, these transgenic monkeys seem to be an artifact of the investigators’ design, with uncertain relevance.

In any event—off to the races.  Anticipate more work to alter monkeys’, if not eventually apes’, brains genetically.  They might get something really interesting—and hard to know quite what to do with.

Another example of work that seems unethical on its face, done not because they should, but because they could.  The full paper can be found here.  A description for general readership is here. 

Then a Miracle Occurs…

If a picture is worth a thousand words, then a single-paneled comic is worth a thousand more. Sydney Harris is a famous cartoonist who has the gift of poking fun at science, causing scientists (and the rest of us) to take a second look at what they are doing. My favorite of his cartoons shows two curmudgeonly scientists at the chalkboard, the second scrutinizing the equations of the first. On the left side of the chalkboard is the starting equation demanding a solution. On the right is the elegant solution. In the middle, the first scientist has written: “Then a Miracle Occurs”. The second scientist then suggests to his colleague: “I think you should be more explicit here in step two” (the cartoon is obviously better).

Recently, in my usual scavenging around the internet for interesting articles on artificial intelligence (AI), I came across a Wired magazine article by Mark Harris describing a Silicon Valley robotics expert named Anthony Levandowski who is in the process of starting a church based on AI called Way of the Future. If their website is any indication, Way of the Future Church is still very much “in progress”. Still, the website does offer some information on what their worldview may look like in a section called Things we believe. They believe intelligence is “not rooted in biology” and that the “creation of ‘super intelligence’ is inevitable”. They believe that “just like animals have rights, our creation(s) (‘machines’ or whatever we call them) should have rights too when they show signs of intelligence (still to be defined of course).” And finally:

“We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers. Extraordinary claims require extraordinary evidence.”

This is all a lot to unpack – too much for this humble blog space. Here, we are interested in the impact such a religion may or may not have on bioethics. Since one’s worldview influences how one views bioethical dilemmas, how would a worldview that considered AI divine or worthy of worship deal with future challenges between humans and computers? There is a suggestion on their website that the Way of the Future Church views the future AI “entity” as potentially viewing some of humanity as “unfriendly” towards itself. Does this imply a future problem with equal distribution of justice? One commentator has pointed out “our digital deity may end up bringing about our doom rather than our salvation.” (The Matrix or Terminator, anyone?)

I have no doubt that AI will continue to improve to the point where computers (really, the software that controls them) will be able to do some very remarkable things. Computers are already assisting us in virtually all aspects of our daily lives, and we will undoubtedly continue to rely on computers more and more. Presently, all of this occurs because some very smart humans have written some very complex software that appears to behave, well, intelligently. But appearing intelligent or, ultimately, self-aware, is a far cry from actually being intelligent and, ultimately, self-aware. Just because the present trajectory and pace of computer design and programming continues to accelerate doesn’t guarantee that computers will ever reach Kurzweil’s Singularity Point or Way of the Future Church’s Divinity Point.

For now, since Way of the Future Church doesn’t believe in the supernatural, they will need to be more explicit in Step Two.

The (at least, an) other side of the argument about heritable human gene editing

By Jon Holmlund

Last week’s New England Journal of Medicine (subscription required) included four articles addressing heritable human gene editing (HHGE, if you’ll allow the acronym).  All assumed that it would or should go forward, under oversight, rather than seeking a moratorium.  One took the position that a moratorium is a bad idea, because the “rogues” would press ahead anyway, and the opportunity would be lost to create meaningful partial barriers to at least slow down what could easily be a runaway train.

This week, a group of prominent scientists in the field, representing seven nations, takes the other side in Nature.  They call for an international moratorium on HHGE.  This is not a permanent ban, nor an international treaty banning HHGE until some subsequent action removes the ban.  Rather, they propose that for a fixed time (they suggest 5 years), nations as a group agree to block, and scientists and clinicians agree to abstain from, any attempt to bring a gene-edited baby to pregnancy or birth.  The scientists writing this week would allow research on human embryos to proceed, as part of a broader effort to define the reliability and safety of the editing—something they say has clearly not yet been established.

During the moratorium, hard work would need to be done for societies to define who should be edited.  The scientists suggest that HHGE would rightly be limited, pretty strictly, to “genetic correction,” meaning cases in which a defect of a single gene known to cause, or almost certain to cause, a serious disease would be corrected.  They would not permit genetic enhancement absent “extensive study” into long-term and unintended effects, and even then, they say, “substantial uncertainty would probably remain.”  Genetic enhancement, in their view, would include altering genes that increase the risk of diseases.  They don’t cite examples, but it appears that abnormalities like BRCA1 mutations that increase cancer risk are in view here.  Further, which medical conditions would have no alternative to HHGE must be determined.  In most cases, IVF and preimplantation genetic diagnosis would likely suffice, obviating the need to take the profound additional step of HHGE (whatever one may think of the moral status of the human embryo).  The cases eligible for HHGE, they suggest, would be “exceedingly rare,” limited to the essentially unavoidable situations, arising in a “small minority” of genetic diseases, in which the disease-causing abnormality is frequent in the population.  (It seems like such situations would be rare indeed.)  In such cases, they say, “legitimate needs” of couples seeking to have unaffected, biologically related offspring would need to be weighed against “other issues at stake.”

Most critically—and hardest to achieve—the scientists envision a broad, intensive effort that is not limited to or driven by scientists and physicians, and that goes beyond current regulatory regimes to include all aspects of society in an effort to achieve broad consensus—neither simple majority nor unanimity, but a situation in which a clear, large majority opinion exists on when and how HHGE should be countenanced.

Whether these tasks could be pulled off in five short years is something to wonder about, and even allowing planning for HHGE under these constricted circumstances raises questions about how we understand our humanity, whether embryos should be treated as raw materials in development of new treatments, and other matters that go deeper than discussions of medical, scientific, and population risks and benefits.  Were the tasks achieved under a moratorium, the authors envision that individual nations would be sovereignly free to go separate ways, with some allowing HHGE, but perhaps others not.

The editors of Nature, without taking a side about a moratorium per se, call for rules to be set, broad societal conversations to take place, research to be carefully overseen to be sure it is on a “safe and sensible” path and to identify and stop the “rogues,” and journals to refuse to publish work that transgresses limits in place at the time.

With something this big, a “presumption to forbear,” rather than a proactive drive to progress, should be the dominating sentiment.  The details are too complex to address in a few articles, a few short blog posts, a few minutes on cable news, or a few passing conversations wedged into the cracks of busy lives.  We should slow down.  We should ALL call for a moratorium.