Vaccines: Modern Trolley Car Dilemmas

The Trolley Car dilemma is back in bioethics news. For those unfamiliar with it: you alone are responsible for operating a trolley track switch to divert an out-of-control trolley car away from five workers on one section of track, only to cause the death of a lone worker on the only alternate section of track. The dilemma: someone is going to die, and you get to decide who. In a recent editorial in the June 13th New England Journal of Medicine, Dr. Lisa Rosenbaum nicely describes the utilitarian dilemma surrounding the public health risks and benefits of a vaccine for the dengue virus, a mosquito-borne virus that annually causes severe illness and death worldwide. The dengue vaccine, Dengvaxia, is a real-world trolley car dilemma. Dengvaxia can presently protect large numbers of patients from this deadly virus, but at the expense of causing severe illness and death in a much smaller number of patients, mostly children.

Dr. Rosenbaum describes our response to utilitarian thinking, correctly I think. We don’t mind utilitarian rules that negatively affect others, particularly when the rules tend to confer benefit on our group as a whole (the very definition of utilitarianism), but we resist utilitarian thinking when it threatens to affect us negatively as individuals, despite the overall benefit to the rest of our group. Healthy self-interest often conflicts with the utilitarian calculus that purports to determine the overall benefit to the group. In the case of Dengvaxia, if the deaths caused by the vaccine only occurred in people who would have died from the natural dengue virus anyway, there would be no problem. In other words, by golly, you all were going to die from the widespread disease anyway, and since the vaccine did save some of you from dying, there is really no new or additional loss. Net positive outcome, right?

Sadly, vaccines do not work that way. With Dengvaxia, it may be possible to create a pre-vaccination test for seropositivity for the virus. This would mean determining whether a person previously had a very mild case of the virus, such that they would not suffer a catastrophic outcome from receiving the vaccine and could therefore safely receive it to prevent a more severe case of dengue in the future. Such a screening test may be possible, but it would cost some unknown amount of additional money and would still not be 100% accurate. Then again, no vaccine is 100% safe.

How many lives would need to be saved, and at what cost, before we are satisfied with the cost/benefit ratio of Dengvaxia (or any vaccine, for that matter)? Presently the World Health Organization recommends that a pre-vaccination test be developed and that only those who test positive for prior exposure be vaccinated. This effectively says that the vaccination is not only not required but not even presently recommended in endemic regions, despite the fact that Dengvaxia clearly and significantly reduces overall mortality and morbidity. If a disease were more contagious and more lethal than dengue, at what point would the vaccine, however imperfect, become mandatory? This is the ultimate trolley car switch for public health officials.
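To make the utilitarian calculus concrete, here is a minimal sketch in Python with entirely hypothetical numbers. The seropositivity rate and every mortality rate below are invented for illustration, not taken from Dengvaxia trial data; the point is only to show how expected deaths compare under three policies (no vaccination, vaccinate everyone, screen first):

```python
# All rates below are hypothetical, chosen only to illustrate the trade-off.
POP = 100_000                       # children in an endemic region
SEROPOSITIVE_RATE = 0.70            # assumed fraction with prior dengue exposure
DEATH_RATE_UNVACCINATED = 0.0010    # assumed baseline dengue mortality
DEATH_RATE_VAX_SEROPOS = 0.0002     # assumed mortality if vaccinated after prior exposure
DEATH_RATE_VAX_SERONEG = 0.0015     # assumed vaccine-caused risk in the never-exposed

def expected_deaths(vaccinate_seropos: bool, vaccinate_seroneg: bool) -> float:
    """Expected deaths in the population under a given vaccination policy."""
    seropos = POP * SEROPOSITIVE_RATE
    seroneg = POP - seropos
    deaths = seropos * (DEATH_RATE_VAX_SEROPOS if vaccinate_seropos
                        else DEATH_RATE_UNVACCINATED)
    deaths += seroneg * (DEATH_RATE_VAX_SERONEG if vaccinate_seroneg
                         else DEATH_RATE_UNVACCINATED)
    return deaths

no_vaccine = expected_deaths(False, False)            # about 100 deaths
vaccinate_all = expected_deaths(True, True)           # about 59 deaths
screen_then_vaccinate = expected_deaths(True, False)  # about 44 deaths
```

Under these made-up numbers, vaccinating everyone saves lives overall yet shifts some deaths onto the seronegative (the trolley switch), while screening first avoids that harm at the added cost of the test itself.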

Aren’t trolley car dilemmas fun?

A safety concern with gene editing

Hat-tip to Dr. Joe Kelley for bringing this to my attention…

As readers of this blog will recall, there is keen interest in exploiting recent discoveries in genetic engineering to “edit” disease-causing gene mutations and develop treatments for various diseases.  Initially, such treatments would likely use a patient’s own cells—removed from the body, edited to change the cells’ genes in a potentially therapeutic way, then returned to the patient’s bloodstream to find their way to the appropriate place and work to treat the disease.  How that would work could differ—make the cells do something they wouldn’t normally do, or make them do something better than they otherwise do (as in altering immune cells to treat cancer); or maybe make them work normally so that the normal function would replace the patient’s diseased function (as in altering blood cells for people with sickle cell anemia so that the altered cells make normal hemoglobin to replace the person’s diseased hemoglobin).

Or maybe we could even edit out a gene that causes disease (sickle cell anemia, Huntington’s disease) or increases the risk of disease (e.g., BRCA and cancer) so that future generations wouldn’t inherit it.  Or maybe we could edit genes to enhance certain health-promoting or other desirable qualities.

The recent scientific enthusiasm for gene editing is fueled by the discovery of the relatively slick and easy-to-use (if you’re a scientist, anyway) CRISPR-Cas9 system, which is a sort of immune system for bacteria but can be used to edit/alter genes in a lot of different kinds of cells.

It turns out that cells’ normal system to repair gene damage can and does thwart this editing, reducing the efficiency of the process.  The key component of this system is something called p53, a critical protein that, if abnormal, may not do its repair job so well.  When that happens, the risk of cancer increases, often dramatically.  In cancer research, abnormal p53 is high on the list of culprits to look out for.

Two groups of scientists, one from the drug company Novartis and one from the Karolinska Institute in Sweden, have published on this.  This thwarting by p53 is particularly active in pluripotent stem cells, which are some, but not the only, candidate cells to be edited to create treatments.  These cells are also constituent cells of human embryos.  If the CRISPR-Cas9 process is used on these cells, p53 usually kills them off—unless it’s lacking or deficient, in which case it doesn’t, but also in which case the altered cells could themselves become cancers later on.

This is something that has to be monitored carefully in developing cells as medicines, so to speak, with genetic editing.  One does not want the patient to appear to be healed, only to develop a cancer, or a new cancer, later on.  One certainly would want to know the risk of that before editing an embryo—an unborn human, a future baby if placed in the right environment—to create a gene-edited human being.

Yet, as I’ve written here in the past, it appears that experimentation in heritable gene editing is pressing on.  I’ve argued, and continue to argue, that heritable human gene editing is a line that must not be crossed, that would place too much trust in the providence of the scientists/technologists who are the “actors” exerting power over fellow humans who become “subjects” in a deep sense of the term; that the risks to the subjects are undefinable; that it would enable perception of humans as “engineering projects”; that the gift of life would tend to be replaced by seeking to limit birth to “the people we want”; that the people acted upon are unable to provide consent or know what risks have been chosen for them by others, even before birth.  Rather than press ahead, we in the human race should exercise a “presumption to forbear.”

A counterargument is that, in limited cases where the genetic defect is known, the disease is terrible, and treatment alternatives are few or none, the risks are worth it.  The recent papers seem to expose that line as a bit too facile.  How many embryos would be created (and destroyed) to develop the technique before “taking it live”?  Could we work things out in animals—monkeys, maybe?  How many generations would we have to alter, create, and follow to be sure that a late risk—such as cancer—does not emerge?  Or maybe our animal rights sensibilities stop us from putting monkeys at such risk—maybe mice will do?

The new papers are dense science.  Frankly, I can grasp the topline story but have trouble digesting all the details.  More sophisticated readers will not be so impaired.  The news report, in the English of the general public, can be read here, the Novartis and Karolinska reports read (but not downloaded or printed) here and here, respectively.

Coming home to roost

Hoo boy.

Scientists who want to study human embryonic development have heretofore been self-limited by a 14-day rule:  embryos can only be experimented on up to 14 days of age, when they start to develop a nervous system.  This is an attempt to avoid censure for unethical experimentation on human subjects, and is seen as something of a concession, since it does not accept that human life begins at conception.

And, inevitably, they seek work-arounds.  One reported this week by Nature is the creation of human chicken hybrid embryos.  Why would someone want to do this?  (Jokes about the San Diego Chicken are NOT called for here.)

Well, apparently 14 days of embryo age is when critical organization takes place, directed by “organizer cells” that don’t appear before then.  So a group of researchers did this:  they took embryonic stem cells (which itself might well require creating and destroying an embryo), and made “embryo-like structures” that had cells that either were, or were just like, these “organizer” cells.  (Apparently these structures were not capable of growing into babies, but even if not, ethical issues remain.)  Then they transplanted these cells into chicken embryos, and watched the resulting hybrid grow, and learned something about how human embryos develop.  They figure this is less of an ethical problem than trying to experiment on a fully human embryo older than 14 days, and that hybrids like this might be able to take the place of experimenting on human embryos to answer many of their questions.

Other scientists disagree with this last statement, and still think they must experiment on fully human embryos to get their answers.

Either way, at a minimum it seems that this work will require creating embryos solely for research, and there is in principle no limit on manipulating the human organism in the name of knowledge.  Work is common on some kinds of “hybrid” animals with human cells, such as immune-deficient mice who have human cells transplanted to reconstitute their immune systems.  But that work usually is done with human cells transplanted into fully-formed mice, which appears different from early, hybrid embryos.

The article describing this work says that the hybrid embryos “didn’t live long enough to hatch.”  Wonder what they would have been like if they had.

One Man’s Trash is Another Man’s DNA Treasure

Last month, investigators used big data analysis, public DNA genealogy websites and “discarded DNA” to identify the Golden State Killer (WSJ subscription needed), an individual believed responsible for over 12 murders, more than 50 rapes and over 100 burglaries in California between 1974 and 1986. While justice may be served if the legal case remains solid, there are some interesting bioethical issues that warrant discussion.

This blog has previously discussed the ethics of searching reportedly anonymized databases and the ability of algorithms to “unanonymize” the data (see HERE and HERE). The current technique used in the Golden State Killer case takes this one step further. Using a public genealogy database site, where individuals looking for distant relatives voluntarily share their personal DNA samples, investigators searched these databases for partial DNA matches. A partial DNA match means the investigators were looking for any relatives of the original suspect, hoping to gain identifying information about a relative that would lead back to the original suspect. Then, using this narrower group of DNA relatives, investigators literally collected DNA samples this group of people unwittingly left behind, such as skin cells on a paper cup in the trash: so-called discarded or abandoned DNA.
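For readers curious about the mechanics, here is a toy Python sketch of what a partial match amounts to: close relatives tend to share at least one allele at many genetic markers, while strangers mostly don’t. The marker names below are real forensic STR loci, but the allele values and profiles are invented, and actual forensic genealogy uses dense SNP arrays and shared-segment analysis rather than this simplified overlap count:

```python
# Toy sketch of a partial (kinship) match. Profiles map marker name -> the
# pair of alleles a person carries at that marker. Relatives share alleles
# by descent at many markers; unrelated people share far fewer.

def shared_marker_fraction(profile_a: dict, profile_b: dict) -> float:
    """Fraction of common markers at which the two profiles share an allele."""
    common = set(profile_a) & set(profile_b)
    if not common:
        return 0.0
    hits = sum(1 for m in common if set(profile_a[m]) & set(profile_b[m]))
    return hits / len(common)

# Invented allele values for illustration only.
suspect  = {"D8S1179": (12, 14), "D21S11": (28, 30), "TH01": (6, 9)}
relative = {"D8S1179": (14, 15), "D21S11": (29, 30), "TH01": (7, 8)}
stranger = {"D8S1179": (10, 11), "D21S11": (31, 32), "TH01": (5, 8)}

shared_marker_fraction(suspect, relative)  # shares an allele at 2 of 3 markers
shared_marker_fraction(suspect, stranger)  # shares at 0 of 3 markers
```

Profiles with high overlap flag likely relatives; investigators then work up and down that family tree with conventional genealogy to narrow the pool of suspects.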

One man’s trash is another man’s DNA treasure.

Presently, neither the method of partial DNA search of public voluntary genealogy databases nor the collection of discarded DNA samples violates the 4th Amendment’s prohibition on unreasonable search and seizure. Neither the Health Insurance Portability and Accountability Act of 1996 (HIPAA) nor the Genetic Information Nondiscrimination Act of 2008 (GINA) provides protection, as none of the data relates to health care records or employment, respectively.

Shouldn’t some law or regulation prevent my personal DNA code from becoming public, particularly if I have not taken steps to publicize it on one of the many public voluntary genealogy sites?

Since your DNA is the ultimate physical marker of personal identity, how much control do you or should you have over it? While you may wish to live a life of anonymity, your extroverted cousin who voluntarily provides her DNA to a public DNA database has just unwittingly publicized some portion of your family DNA as well as traceable personal family data that may allow others to know more about you than you desire. An energetic sleuth dumpster-diving your trash can retrieve your actual DNA. I shred my mail to avoid my social security number or other personal financial information from being obtained and used for identity theft. How do I “shred my DNA” to prevent it from being similarly recovered from my trash?

What may someone do with my DNA information obtained using these techniques? What should someone be able to do?

You could not have convinced me back in 2001 that anyone would spend money to build cars with 360 video equipment and figure out optimal routes that would eventually become what is now Google Street View. Might not someone do the same thing with trash-sourced DNA samples, perhaps Google DNA View?

We already have figured out the garbage truck routes.

More on genetic medicine

The third and final installment from The Code, a series of 3 short documentaries on the internet about the origins of genetic medicine, is entitled “Selling the Code.”  This one is about genetic testing to try to predict risks of diseases, among other things.  Doctors use some of this testing in clinical care and in a burgeoning amount of research.  A number of companies, such as 23andMe, will, for a (not-too-high) price, sequence your genes, or at least some of them, from a cheek swab sample you send, and then give you a report of what the results are and what they might mean.  In cases where there is a simple connection between a genetic abnormality and a disease—if you have the gene, you get the disease—the approach can be very helpful.  But it’s rarely simple.  Even for known cancer-propensity genes like BRCA1 and BRCA2, there are many variants, and what they mean clinically is far from fully known.  In fact, for most of the common diseases we care about, like heart disease, diabetes, and most cancers, the story is complicated indeed.  So what to do with the information is often far from obvious, and careful genetic counseling by a physician who specializes in genetic medicine is a must.

23andMe ran afoul of FDA a couple of years ago, leading to a long process that resulted in FDA acceptance of a more limited menu of testing by the company.

And some companies will sell you “genetic information” for more trivial concerns—presuming to tell you something meaningful about what fitness regimen you should pursue, or what wine you’ll like.  Caveat emptor, I suppose, although the risks are low for some of this.

AND—companies like 23andMe keep anonymized databases of the genetic information they get for and from their customers, and sell that information to drug companies to support those companies’ research.  An individual can’t be identified in the process (at least, not readily; see my January 2013 post about “DNA research and (non)anonymity”), but the data in the aggregate is valuable to the genetic sequencing company.

These kinds of concerns—particularly what to do with an individual’s information, but also the usefulness of having genetic data on a large group of people to understand disease and help discover new treatments—are germane to an ongoing project of the Hastings Center to assess the implications of genetic testing of the whole genomes of large numbers of babies, to screen for any of several dozen genetic diseases.   Again, most of the babies will be perfectly healthy, and the yield from screening for rare conditions is low.  But people arguably have a right to know about themselves, and parents to know about their newborns.  Yet still, to what end will we use information that we don’t fully understand?  Read a good Los Angeles Times article, that overlaps some of the points in The Code’s video, and provides other useful information in quick-and-easy form, here.

Finally, I was gratified to read that a project to synthesize an entire human genome in the laboratory is being scaled back, at least for now.  Apparently, they can’t raise enough money.  I bet would-be investors aren’t convinced they could own the results and guarantee a return on their money.  I fretted about this in May of 2016 and again in July of the same year.  I encourage readers to click through and read those, as well as the concerns raised by Drew Endy of Stanford and Laurie Zoloth of Northwestern, who criticized both the effort in concept and the closed-door, invitation-only meeting at Harvard to plan it.

That was two full years ago.  A lot is going on under our noses.

Deep Brain Stimulation: the New Mood Modifier?

A patient of mine recently had a deep brain stimulator (DBS) placed to reduce her severe tremors. The stimulator has worked very well, almost eliminating her tremor, but has produced a side effect: her personality is now more impulsive. Her husband notices this more than the patient does. Both agree that the reduction in the tremor outweighs the change in her personality, though her husband has indicated that the personality change has been greater than he imagined when they were initially considering the surgery. He has commented that if her new impulsivity were any stronger, he might be inclined to reverse the process. As one might imagine, the patient sees no problem with the impulsivity and remains extremely pleased with her newfound lack of tremor.

I share the preceding clinical vignette as backdrop to a recent article in Nature describing research funded by the US military’s research agency, the Defense Advanced Research Projects Agency (DARPA – the same group that sponsored the early development of the Internet), which is looking into modifying neural activity with the goal of altering mood and, eventually, curing mental health disorders. Using patients who already have DBS stimulators in place for treatment of epilepsy or movement disorders such as Parkinson’s Disease, scientists are developing algorithms that “decode” a person’s changing mood. Edward Chang, a neuroscientist at the University of California, San Francisco (UCSF), believes his team has a preliminary “mood map” and further believes that they can use the DBS stimulators to stimulate the brain and modify the local brain activity to alter the patient’s mood. The UCSF group describes this as a “closed loop” (using the stimulator to both record from and then stimulate the brain). Chang further admits that they have already “tested some closed-loop stimulation in people, but declined to provide details because the work is preliminary.”
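The “closed loop” idea itself is simple to state: record neural activity, decode a mood score from it, and stimulate only when the score crosses a threshold. Here is a conceptual Python sketch; the decoder and the threshold are invented placeholders, nothing like UCSF’s actual (undisclosed) algorithms:

```python
# Conceptual closed-loop sketch: the same implanted device records neural
# activity, decodes a mood estimate, and decides whether to stimulate.

def decode_mood(neural_samples: list) -> float:
    """Placeholder decoder: a real one maps multichannel recordings to a
    mood estimate; here we simply average the recorded samples."""
    return sum(neural_samples) / len(neural_samples)

def closed_loop_step(neural_samples: list, threshold: float = 0.0) -> bool:
    """One cycle of the loop: return True if stimulation would be triggered."""
    return decode_mood(neural_samples) < threshold

closed_loop_step([-0.4, -0.2, -0.3])  # decoded mood below threshold: stimulate
closed_loop_step([0.5, 0.3, 0.4])     # decoded mood above threshold: do nothing
```

Note where the ethical weight sits: whoever writes decode_mood and sets the threshold is deciding when a machine intervenes in a person’s affect.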

If scientists are on the verge of changing your mood, might they not also be on the verge of creating your urges? Professor Laura Cabrera, a neuroethicist, and Professor Jennifer Carter-Johnson, a lawyer, both at Michigan State University, argue that we need to begin worrying about that possibility, and further, that we need to begin considering who is responsible for those new urges, particularly if those urges result in actions that cause harm to other people. The article does a masterful job of exploring the ethical-legal ramifications of just what happens when your DBS causes you to swerve your car into a crowd of people – is it your fault, or did your DBS make you do it?

Returning to my patient, the alteration in her behavior is an unwanted but not a completely surprising result of her DBS to treat her movement disorder. Despite the informed consent, her husband was not prepared for the change in her personality. The treatment to correct my patient’s movement disorder (a good thing) has altered my patient’s personality (a not-so-good thing). My patient’s husband might even argue that his wife is almost a different person post DBS.

When we modify the brain in these experiments, we are intentionally modifying behavior but also risk modifying the person’s actual identity – the “who we are”. As the DARPA experiments proceed and cascade into spin-off research arms, we need to be very clear with patient-subjects in current and future informed consents that the patient who signs the consent may end up very different from the patient who completes the experiment. How much difference in behavior or urges should we tolerate? Could the changes be significant enough that they are considered a new person by their family and friends?

And if that is true, who should consent to the experiment?

New short videos on genetic topics

This week, an email from the Hastings Center promoted The Code, a series of 3 short documentaries on the internet about the origins of genetic medicine.  The three are being released one week at a time.  The first, released this week, briefly (12 minutes) reviews the determination, or sequencing, of the entire human genome, a project conducted in the 1990’s, and completed in 2000, by two labs—one in the government, one private—that initially worked in competition but ended working in collaboration.

It’s a nice review of the key points:

  • A person’s entire genome can be read fast—in a few hours—by an automatic process, at an ever-decreasing cost that now is on the order of $1000.
  • We still are FAR from understanding what the genetic code means for human disease. The number of cases in which there is a reasonably direct link between a single genetic abnormality, or a small number of them, and a disease, in a way that allows us to predict risk of disease or make an enlightened selection of treatment, is still small.
  • With more reading of peoples’ genomes, and more computing power, what amounts to a massive pattern-recognition problem will likely yield more solutions that can be practically exploited to the benefit of human health. Some entities are collecting more peoples’ genomes in a database, for ongoing analysis and, at first, hypothesis generation—that is, “maybe this is a lead that could be acted on for benefit, after the proper follow-on research.”
  • But for now, we should not get carried away—”personalized medicine” is not generally “ready for prime time,” but useful only in a few specific situations, and often most appropriately the subject of new medical research. And one should be careful to get well-informed advice from a medical professional who is expert in genetic medicine, and not over-interpret what a commercial entity might be advising.  (But that, about which this blog has commented in the past, is for another time and another posting.)

This first video does not get into ethical issues—e.g., of justice, privacy, and the like.  But it is a good, quick, engaging overview suitable for the general public.  (BTW, I hate calling non-scientists and non-physicians “lay people,” a term I think best reserved to distinguish most of us from the clergy, and the abuse of which just reinforces the notion of medical scientists as a sort of “priesthood.”)

The second video in the series, due out next week, is on gene editing, and the third, the week after, will address companies that are willing to sequence your genes and tell you, for a price, what they think it might all mean.

The Ethics of Pet Cloning

Anyone who passes through a grocery checkout line on a weekly basis is unable to remain ignorant of the latest thoughts and insights from Hollywood. With ethical pronouncements from Hollywood, I usually find it reliable to point my moral compass in the opposite direction, at least until I have time to further evaluate the issue. Such was the case with a recent National Enquirer scoop that Barbra Streisand has cloned her now-deceased Coton de Tulear dog Samantha, producing two offspring, Miss Violet and Miss Scarlett. The fact that she cloned her pet was interesting in its own right, as I did not realize this process was commercially available to the general (wealthy) public. Perhaps more interesting was the backlash Ms. Streisand has experienced from Twitter (generally) and PETA (specifically), largely on ethical grounds. More on this in a moment. The Streisand scoop actually should be credited to a Variety interview, and the initial ethical discussion to both the New York Times and Fox News (offering, no surprise, differing vantage points).

Some of Streisand’s harshest criticism came from Twitter under the hashtag #adoptdontclone. One argument against the pet cloning process was that it is unjust, given that only rich people could afford the price tag, which according to the NYT link above ran around $50,000. Another argument was to remind Ms. Streisand that Miss Violet and Miss Scarlett are not the same as the original Samantha, even though they might look or even act in ways that remind Streisand of her dear departed. These arguments touched on the very themes of genetic determinism vs. environmental nurture, admittedly in a rudimentary way. The PETA arguments described the pain and suffering they claimed the female dogs experienced during the egg harvesting required for the cloning process to succeed, arguments eerily similar to the risks women face with egg harvesting for some IVF procedures.

The strongest or, at least, most popular argument leveled at Ms. Streisand was that cloning her pet eliminated the possibility that she might adopt an already existing puppy, who very much needed a loving owner to provide that puppy a better future. While no one is making a similar argument against human cloning in favor of human adoption (since human cloning is presently illegal), similar arguments have been made with IVF vs. adoption.

The point of all this was to appreciate some of the ethical arguments presently used by the lay press against pet cloning by Hollywood’s elite, and to wonder whether, if and when human cloning becomes accessible to the general (wealthy) public, similar arguments will resurface to protect the humans involved then with the same loud voice used to protect the animals now.

Toward true public engagement about gene editing

The March 22, 2018 edition of Nature includes two thoughtful, helpful commentaries about improving the public dialogue around “bleeding edge” biotechnologies.  In this case, the example is gene editing, of which one commentator, Simon Burall from the U.K., says, “Like artificial intelligence, gene editing could radically alter almost every domain of life.”  Burall’s piece, “Don’t wait for an outcry about gene editing,” can be found here.  The other commentary, “A global observatory for gene editing,” by Harvard’s Sheila Jasanoff and J. Benjamin Hurlbut from Arizona State, can be found here, and an umbrella editorial from the editors of Nature is here.  All are open-access and all are worth reading by any citizen who would like to be informed at even a general level about the ethical discussions of biotechnology.

The three share this tone: more inclusiveness, more humility on the part of scientists, and willingness to have difficult conversations are called for—and have been generally lacking in past efforts to engage the public in discussion of the implications of new biotechnologies.  In the view of Jasanoff and Hurlbut, even the much-admired 1975 Asilomar conference that established boundaries on recombinant DNA research and its applications, was too narrow, focusing on technically-definable risks and benefits but not taking time to reflect more deeply on the ultimate ramifications of what the scientists were doing.  The experts dominate, and lecture—gently, but clearly—the “laity.”  This can create a sort of foregone-conclusion effect: getting people comfortable with the research agenda and the scientists’ and technologists’ (including industry players’) goals is the true point.  The possibility that some work simply should not be pursued for a while may scarcely be expressed, much less heeded.  As Hans Jonas said in a reflection about Asilomar, “Scientific inquiry demands untrammeled freedom for itself.”

Burall, Jasanoff, and Hurlbut seem to be saying, repent from that, as it were.  Don’t just have a panel of a dozen scientists or so meet for a single seminar or webinar with a dozen or so non-scientists (with, I might add, the token clergyperson).  Create a clearinghouse for a wide range of views on what gene editing really might mean, and how humans should respond.  Open the dialogue to a large number, not just a few, non-scientists from a wide range of perspectives.  Pay attention to cultures other than the developed West—especially the global South.  Perhaps start with seminars that are cooperatively organized by several groups representing different interests or stakeholders, but don’t stop there—create a platform for many, many people to weigh in.  And so on.

They don’t suggest it will be easy.  And we do have a sort of clearinghouse already—I call it the Internet.  And we’d want to be sure—contra John Rawls—that viewpoints (yes, I’m thinking of God-centered perspectives) are not disqualified from the outset as violating the terms of the discussion.  And, perhaps most importantly, what threshold of public awareness/understanding/agreement would be insisted upon to ground public policy?  Surely a simple popular majority would be suspect, but unanimity—achievable in smaller groups, with difficulty—would be impossible.  And concerns about “fake news” or populist tendencies run amok (the “angry villagers”) would be unavoidable.

But, as Jasanoff and Hurlbut say, “In current bioethical debates, there is a tendency to fall back on the framings that those at the frontiers of research find most straightforward and digestible…[debate must not be limited by] the premise that, until the technical capability does exist, it is not necessary to address difficult questions about whether [some] interventions are desirable…Profound and long-standing traditions of moral reflection risk being excluded when they do not conform to Western ideas of academic bioethics.”

Bingo and amen.  How to make it happen, I am not sure.  Jasanoff and Hurlbut say they are trying to get beyond binary arguments about the permissibility or impermissibility of germline genome editing, for example.  Still, I don’t see how the “cosmopolitan” public reflection they advocate can go on without agreeing on something like a fairly firm moratorium—a provisional “presumption to forbear,” as I like to put it—while the conversation proceeds.  And hey, we’re the Anglosphere.  We’re dynamic, innovative, progressive, pragmatic, visionary.  We don’t do moratoria.   Moratoria are for those Continental European fraidy-cats.  Then again, these writers are seeking a truly global discussion.  And past agreement by assembled nation-states appears to have at least slowed down things like chemical and biological munitions (recent events in Syria notwithstanding).

These authors are doing us a service with their reflections.  Read their articles, give them a careful hearing—and note that their email addresses are provided at the end.  Maybe I’ll write to them.

Resources regarding ethics of gene editing

Recently, two resources have become available regarding gene editing and the issues raised by it.

First, the National Academies of Science, Engineering, and Medicine have made available an archive of its February 22 webinar about human gene editing.  The home page for the Academies’ human gene-editing initiative is here.  A link to the archived webinar is here.  The slides can also just be viewed here.

Second, Issue 1 of Volume 24 of the journal The New Bioethics is dedicated to human gene editing.  The entire issue, or individual articles from it, are available online for purchase, or for viewing if you have access through an academic institution.  Article titles deal with, for example, differentiating gene editing from mitochondrial transfer, comparing ethical issues with gene editing vs embryo selection, and “selecting versus modifying” to deal with disabilities.

I have not been through these materials in any detail, yet.  The webinar looks a smidge promotional, co-sponsored as it was by the Biotechnology Industry Organization (BIO).  But it also recommends the Academies’ report on the status of human gene editing, and summarizes key recommendations, which include limiting efforts (at least for the present!) to editing “somatic,” or, if you will, “adult” cells to make them into cellular therapies for recognized diseases.  This is well within the existing ethical and regulatory regime governing clinical research and treatment development, as opposed to the deeply problematic prospect of heritable gene editing, or attempts to edit genes for human enhancement, both of which the report and the webinar (at least the slides) counsel that we NOT rush into.  The New Bioethics articles look thoughtful and worth reviewing, which I hope to do (and comment on) in the near future.