Bioethics & “Three Identical Strangers”

By Neil Skjoldal

I recently had the opportunity to watch the 2018 documentary Three Identical Strangers, which tells the story of triplets Bobby Shafran, Eddy Galland, and David Kellman.  They were separated shortly after birth in the 1960s and adopted by three different families through the Louise Wise adoption agency in New York City.  The way they happened to find out about each other in 1980 is fascinating.  It created a media sensation at the time, including an appearance on The Phil Donahue Show.

The documentary starts by sharing their thrill of discovery, which included the many similarities that the brothers have, even though they spent the first 19 years of life apart.  However, it eventually moves to some of the larger and darker questions that lingered for each of the adoptive families—the biggest of which was, “Why weren’t we told that there were siblings?”  And as you might suspect, the agency representatives did not provide many helpful answers.  The parents’ feelings of anger and bewilderment resonated with me as an adoptive parent.

Eventually the brothers came to find out that they were part of a “twins study” conducted by the noted psychologist Peter Neubauer.  The study involved the brothers being interviewed and filmed individually every year through the first few years of their lives, without any of them knowing that his brothers even existed.  Their parents were told it was a study of adopted children, not a study of twins.

The documentary leaves little doubt where it stands on the ethics of this matter.  From some of those interviewed, it appears that the purpose of the study was to address the classic “nature vs. nurture” question.  However, the harm done to these brothers (and the others who were unknowingly involved in the study), making them feel like ‘lab rats,’ undermines any positive value that the study may have had.  That Neubauer’s research remains sealed at Yale until 2066 adds fuel to the fire that something unethical was done.

In a blog post on www.psychologytoday.com, Dr. Leon Hoffmann asks whether it is reasonable to expect researchers of previous generations to follow our contemporary standards.  He asserts that both the original researchers and the producers of the documentary are guilty of self-deception.  This is a point worth considering as we look back; however, this case is from the 1960s and those impacted are still very much alive.

It is difficult for me to disagree with the assessment of reviewer Neta Alexander of www.haaretz.com:  “’Three Identical Strangers’ is thus a faithful representation of the spirit of the times. It’s about the way in which the authorities and those with power – headed by a charismatic and respected psychologist – abuse their powers in the name of science.”  Three Identical Strangers stands as a timely reminder that there should be safeguards and limits to research.

Public input into gene-editing decisions

Lyme disease is caused by a type of bacteria that lives in mice, which are considered a “reservoir” for the disease-causing agent.  Ticks bite the mice, pick up the bacteria, and then infect people when they bite them.  (Ticks are called the “vector” for the disease.)

If mice were immune to the bacteria, their immune systems would destroy it; there would be no reservoir, and no Lyme disease.  If scientists genetically engineered mice to make them immune—for example, by editing their genes—they could make progress toward that goal.  But for this to work, the mouse population would have to be made up predominantly of bacteria-immune mice.  That could be accomplished by using “gene drive,” an approach that would make the altered gene spread preferentially and rapidly through the population.  However, doing that could alter the environment in unpredictable ways.

Because of the risks, scientists on the “Mice Against Ticks” project are determined that even if they succeed in genetically altering mice as suggested, they will not release those mice into the wild without full public awareness and approval.  They are holding public meetings—specifically, in Martha’s Vineyard and Nantucket—well in advance of the project coming to full fruition.  And they are trying to figure out, with the public, what level of communication and acceptance constitutes public approval.

Similarly, scientists in New Zealand would like to use a form of gene drive to greatly reduce the population of rats, possums, and other destructive predators that are decimating the environment.  And their public deliberations include seeking advice and, before taking action, buy-in from a network of Maori leaders.  Those conversations are so sensitive that the Maori objected when the scientists published a “what-if” type of article discussing the issues raised by the technology.  Among the concerns: some readers got the impression that gene editing of the animals was imminent, not hypothetical, as it still is.  Some of the news coverage of the Nuffield Council’s recent deliberations about the potential acceptability of heritable human gene editing seemed, likewise, to create the impression that the birth of the first gene-edited human is upon us—which it is not, not quite yet.

The public discussions above are two commendable moves toward true public involvement in decision-making about gene editing.  They were described in a recent Wall Street Journal article.  If you have subscription access, by all means read it.

A safety concern with gene editing

BY JON HOLMLUND

Hat-tip to Dr. Joe Kelley for bringing this to my attention…

As readers of this blog will recall, there is keen interest in exploiting recent discoveries in genetic engineering to “edit” disease-causing gene mutations and develop treatments for various diseases.  Initially, such treatments would likely use a patient’s own cells—removed from the body, edited to change the cells’ genes in a potentially therapeutic way, then returned to the patient’s bloodstream to find their way to the appropriate place and work to treat the disease.  How that would work could differ—make the cells do something they wouldn’t normally do, or make them do something better than they otherwise do (as in altering immune cells to treat cancer); or maybe make them work normally so that the normal function would replace the patient’s diseased function (as in altering blood cells for people with sickle cell anemia so that the altered cells make normal hemoglobin to replace the person’s diseased hemoglobin).

Or maybe we could even edit out a gene that causes disease (sickle cell anemia, Huntington’s disease) or increases the risk of disease (e.g., BRCA and cancer) so that future generations wouldn’t inherit it.  Or maybe we could edit genes to enhance certain health-promoting or other desirable qualities.

The recent scientific enthusiasm for gene editing is fueled by the discovery of the relatively slick and easy-to-use (if you’re a scientist, anyway) CRISPR-Cas9 system, which is a sort of immune system for bacteria but can be used to edit/alter genes in a lot of different kinds of cells.

It turns out that cells’ normal system to repair gene damage can and does thwart this, reducing the efficiency of the process.  The key component to this is something called p53, a critical protein that, if abnormal, may not do its repair job so well.  When that happens, the risk of cancer increases, often dramatically.  In cancer research, abnormal p53 is high on the list of culprits to look out for.

Two groups of scientists, one from the drug company Novartis and one from the Karolinska Institute in Sweden, have published on this.  The thwarting of gene editing by p53 is particularly active in pluripotent stem cells, which are some, but not the only, candidate cells to be edited to create treatments.  These cells are also constituent cells of human embryos.  If the CRISPR-Cas9 process is used on these cells, p53 usually kills them off—unless p53 is lacking or deficient, in which case the cells survive, but those altered cells could themselves become cancers later on.

This is something that has to be monitored carefully in developing cells as medicines, so to speak, with genetic editing.  One does not want the patient to appear to be healed, only to develop a cancer, or a new cancer, later on.  One certainly would want to know the risk of that before editing an embryo—an unborn human, a future baby if placed in the right environment—to create a gene-edited human being.

Yet, as I’ve written here in the past, it appears that experimentation in heritable gene editing is pressing on.  I’ve argued, and continue to argue, that heritable human gene editing is a line that must not be crossed, that would place too much trust in the providence of the scientists/technologists who are the “actors” exerting power over fellow humans who become “subjects” in a deep sense of the term; that the risks to the subjects are undefinable; that it would enable perception of humans as “engineering projects”; that the gift of life would tend to be replaced by seeking to limit birth to “the people we want”; that the people acted upon are unable to provide consent or know what risks have been chosen for them by others, even before birth.  Rather than press ahead, we in the human race should exercise a “presumption to forbear.”

A counterargument is that in limited cases, where the genetic defect is known, the disease is terrible, and treatment alternatives are few or none, the risks are worth it.  The recent papers seem to expose that line as a bit too facile.  How many embryos would be created (and destroyed) to develop the technique before “taking it live”?  Could we work things out in animals—monkeys, maybe?  How many generations would we have to alter, create, and follow to be sure that a late risk—such as cancer—does not emerge?  Or maybe our animal-rights sensibilities stop us from putting monkeys at such risk—maybe mice will do?

The new papers are dense science.  Frankly, I can grasp the topline story but have trouble digesting all the details.  More sophisticated readers will not be so impaired.  The news report, written in plain English for the general public, can be read here; the Novartis and Karolinska reports can be read (but not downloaded or printed) here and here, respectively.

More on genetic medicine

The third and final installment from The Code, a series of 3 short documentaries on the internet about the origins of genetic medicine, is entitled “Selling the Code.”  This is about genetic testing to try to predict risks of diseases, among other things.  Doctors use some of this testing in clinical care and in a burgeoning amount of research.  A number of companies, such as 23andMe, will, for a (not-too-high) price, sequence your genes, or at least some of them, from a cheek swab sample you send, and then give you a report of what the results are and what they might mean.  In cases where there is a simple connection between a genetic abnormality and a disease—if you have the gene, you get the disease—the approach can be very helpful.  But it’s rarely simple.  Even for known cancer-propensity genes like BRCA1 and BRCA2, there are many variants, and what they mean clinically is far from fully known.  In fact, for most of the common diseases we care about, like heart disease, diabetes, and most cancers, the story is complicated indeed.  So what to do with the information is often far from obvious, and careful genetic counseling by a physician who specializes in genetic medicine is a must.

23andMe ran afoul of FDA a couple of years ago, leading to a long process that resulted in FDA acceptance of a more limited menu of testing by the company.

And some companies will sell you “genetic information” for more trivial concerns—presuming to tell you something meaningful about what fitness regimen you should pursue, or what wine you’ll like.  Caveat emptor, I suppose, although the risks are low for some of this.

AND—companies like 23andMe keep anonymized databases of the genetic information they get for and from their customers, and sell that information to drug companies to support those companies’ research.  An individual can’t be identified in the process (at least not readily; see my January 2013 post about “DNA research and (non)anonymity”), but the data in the aggregate is valuable to the genetic sequencing company.

These kinds of concerns—particularly what to do with an individual’s information, but also the usefulness of having genetic data on a large group of people to understand disease and help discover new treatments—are germane to an ongoing project of the Hastings Center to assess the implications of genetic testing of the whole genomes of large numbers of babies, to screen for any of several dozen genetic diseases.  Again, most of the babies will be perfectly healthy, and the yield from screening for rare conditions is low.  But people arguably have a right to know about themselves, and parents to know about their newborns.  Yet still, to what end will we use information that we don’t fully understand?  Read a good Los Angeles Times article, which overlaps some of the points in The Code’s video and provides other useful information in quick-and-easy form, here.

Finally, I was gratified to read that a project to synthesize an entire human genome in the laboratory is being scaled back, at least for now.  Apparently, they can’t raise enough money.  I bet would-be investors aren’t convinced they could own the results and guarantee a return on their money.  I fretted about this in May of 2016 and again in July of the same year.  I encourage readers to click through and read those, as well as the concerns raised by Drew Endy of Stanford and Laurie Zoloth of Northwestern, who criticized both the effort in concept and the closed-door, invitation-only meeting at Harvard to plan it.

That was two full years ago.  A lot is going on under our noses.

Belgian Euthanasia: Volunteers No Longer Necessary?

A recent resignation letter by one member of Belgium’s Euthanasia Commission suggests the slippery slope of who meets the criteria for legal euthanasia is becoming even more slippery. Dr. Ludo Vanopdenbosch sent his resignation letter to members of the Belgian Parliament who oversee the commission. His concern was with one of the main requirements of the law, which demands that the individual patient formally request euthanasia. Vanopdenbosch claims euthanasia was performed on a psychiatric patient without his or her request. His resignation has generated substantial concern not only because Vanopdenbosch is a committee member but also because he is considered a strong advocate of euthanasia. Here is the AP article in Voice of America with the details.

One of the main tasks of the Belgian Euthanasia Commission is to review every euthanasia case to make sure each meets the legal criteria necessary for euthanasia. Any case in doubt is referred to the public prosecutor’s office. It is perhaps telling that in the 15 years since the legalization of euthanasia in Belgium, over 10,000 individuals have been euthanized, but only one case has been referred to prosecutors by the commission with the concern that it may have been performed illegally. Vanopdenbosch argues that the commission is acting in place of the courts, a potential conflict of interest given that those on the commission are generally considered strong supporters of euthanasia. In addition to the slippery slope metaphor used earlier, one might also add that the foxes are guarding the henhouse.

An internal review of this particular case resulted in the committee claiming that what really happened was an accidental death related to palliative care rather than the involuntary or non-voluntary euthanasia claimed by Vanopdenbosch. The general population will never know, as commission protocol and privacy concerns prevent the details of the case from ever reaching the light of day. In the absence of further details, one wonders whether the alleged palliative care for the unknown psychiatric condition was formally requested by an otherwise competent patient or simply provided without his or her formal consent, but “in his or her best interest,” by the patient’s physician or caregivers.

It is presently unknown whether or not Dr. Vanopdenbosch’s resignation will result in any changes in the structure, function or transparency of Belgium’s Euthanasia Commission. At the very least, one would expect to see an increase in referrals to the public prosecutor’s office for legal oversight. It is simply unbelievable that the commission has found only one case out of 10,000 sufficiently suspect to refer to prosecutors for legal review. Perhaps more importantly, I want to believe that even those supporting euthanasia would be against all forms of non-voluntary euthanasia, particularly involuntary euthanasia. Sadly, I am naive. In our post-modern world, how can any death be a “good death” unless, at the very least, the competent patient in question so stipulates?

(For an excellent recent YouTube interview containing a brief history of euthanasia, please see this link of an interview with Dr. Richard Weikart, Professor of History at California State University, Stanislaus. Some highlights: at 10:40 where he touches on Belgium and psychiatric euthanasia, at 19:00 where he discusses the slippery slope argument, and at 21:30 regarding non-voluntary euthanasia)

Parkland & Bioethics

I have lived in South Florida over 20 years now, and I do not remember anything grabbing and holding our community’s consciousness more than the February 14 shooting at the Stoneman Douglas High School in Parkland, Florida (in Broward County).  In its aftermath, the more we hear about the events of that day, the more alarming it becomes.  This is the sort of tragedy that haunts children in profound ways.  I have had conversations with my two teenage daughters about the relative safety of their schools, and what would happen if the formerly unthinkable occurred.

It’s hard to keep track of all the news coverage.  Certainly, there are many on all sides of the gun issue who engage in sensationalism and scare tactics.  Sadly, the voice of the so-called “reasonable middle” often is silenced by the loud voices on the fringes.  I sincerely, but mistakenly, thought that after the horrific shooting at Newtown, Connecticut in 2012 (with the deaths of 20 first graders and 6 adults) leaders would take meaningful action.

Is gun violence a bioethics issue? A research letter published in JAMA in early 2017 says as much.  After citing several powerful statistics, the authors write: “Compared with other leading causes of death, gun violence was associated with less funding and fewer publications than predicted based on mortality rate.”  The debate is over the impact of the Dickey Amendment, passed in 1996, which states that “none of the funds made available for injury prevention and control at the Centers for Disease Control and Prevention [CDC] may be used to advocate or promote gun control.”

Some Republicans in Congress say that the CDC is allowed to do research into gun violence even under the Dickey Amendment, but the evidence presented by Stark and Shah suggests that it is not being done.   Other Republicans have stated that the Dickey Amendment should be revisited. According to www.thehill.com, Rep. Bob Goodlatte from Virginia said that it would be appropriate for lawmakers to review the policy. He is quoted as saying, “I don’t think it’s inappropriate — particularly if the original author of that says it should be examined — to take a look at it . . . to see if there is a way to do that, to promote the cause, the core pursuit of the Centers for Disease Control, which is to prevent disease, not to address issues related to things that happen because someone has a disease like mental illness.”

Clearly, the subject of guns is controversial. Would CDC research into gun violence help affirm human dignity? Or, would the research be too politically biased to be of any value?   Might there be some valuable data gathered that could help address this most tragic of issues? This is a conversation worth having.

Reviewing the ethics of paying human research subjects

Sometimes it is both necessary and proper to pay a person to participate in a clinical trial, of a drug or some other medical intervention, or a data-collection study, or something else that involves people.  An article in this week’s New England Journal of Medicine reviews many of the relevant ethical issues.

A link to the article is here.  Correction to initial post:  subscription or purchase does appear to be required.

Why pay somebody to be in a trial?  The main reasons are to reimburse them for unavoidable expenses, to compensate them for time that would not otherwise be required in the course of standard medical care or normal life, and, indeed, to get them to participate in the first place.  In cancer medicine, where I’ve worked, the subjects are cancer patients who are generally not paid to participate; they usually are willing to do so in the hope of possible benefit, plus, often, a sense of altruism.  But most drugs have their first human testing in healthy volunteers, to begin to identify potential safety concerns and understand how, and how rapidly, the drug is eliminated from the body.  In those cases, the research subjects are almost always paid, sometimes substantially.

Such payments are not necessarily unethical, as long as they are not too big.  If they are, then they could create an undue influence to participate.  That would upset the balance of benefits and risks and compromise true informed consent.  By well-accepted ethical standards for research on human subjects—many of which are codified in regulation—the risks to human subjects must not be excessive, must be avoided or mitigated to the extent reasonably possible and commensurate with the goals of the research, and must not exceed the foreseeable benefits of the research, either to the individual subject or to society overall (e.g., in the form of important medical knowledge), or both.

Payment to a subject is not considered a benefit in and of itself, but should be “neutral” to the benefit/risk assessment.

There’s no hard and fast rule about paying subjects—no single standard “fee schedule,” so to speak.  Rather, each ethics board reviewing a study must also review and approve the amount and timing of payments to subjects.  Again, such payments should be high enough to respect the subject’s contribution to the research, but not so high as to create an incentive to participate when maybe they should not.  Also, it’s a general principle that payment should be made in installments; generally, no more than 10-15% of the total should be held back until the very end of the study.  Why this last point?  Because it’s also a principle that subjects can opt out of a study at any time, but if they think “I have to stay in to the bitter end to get paid,” that could pressure them too much.

Note, BTW, that such pressure is not the same as coercion, which by definition involves a threat, and does not apply to this payment question.

Also, payments must be appropriate so that subjects don’t get a wrong idea about the potential value or efficacy of an experimental drug, and so that they aren’t induced to try to be in more than one study at once.  You might be surprised how significant that last risk is.  In my past IRB work, we used to worry about “professional subjects” who make some sort of living by going from one research study to another.  Being in more than one study at once means getting two or more drugs at once that probably ought not to be combined, willy-nilly.

And of course, the potential for economic exploitation of low-income individuals must also be considered and respected.

The NEJM article really doesn’t break new ground but is a helpful review for those interested in essential research ethics.  The FDA has also provided guidance, which can be reviewed here.

Stem Cell Clinics & the FDA

When any business over-promises and under-delivers, it is well on its way to failure.  Does this principle also hold true in the world of stem cells?  In the last few months, the promise of stem cell treatment has met the reality of government oversight.

Does the government have the responsibility to rein in the larger-than-life claims of stem cell treatment clinics? In a letter dated August 24, 2017 to US Stem Cell of Sunrise, Florida, the FDA cited at least 14 failures relating to the facility’s compliance with federal regulations. It is a powerful letter that makes one wonder what is happening in some of these clinics throughout the country. US Stem Cell responded quickly, re-asserting their claim that they were simply treating consenting patients with their own cells and not subject to the same sorts of regulations that drug manufacturers are.

Is there a place for government oversight over stem cell clinics? At the very least, it could easily be argued that some of their claims are over-the-top and should be subject to false advertising laws.  Michael Joyce makes this point clearly.  He cites the concerns of stem cell researchers Paul Knoepfler and Jeanne Loring. Dr Loring puts it bluntly: “[Stem cell clinics] don’t want to talk to real scientists . . . Because 99 percent of them know they’re pulling the wool over people’s eyes. This is marketing, not science.”

Joyce makes an important point. There are real people with real physical problems who are turning to stem cell clinics as a last resort. If one buys a faulty product from the mall, one has the opportunity to return it for a refund. However, if one receives a faulty medical procedure, how can one be repaid for that loss? In these cases, shouldn’t stem cell clinics be held accountable for misleading the public?

The New York Times describes what happened to some unfortunate individuals who suffered at the hands of US Stem Cell: “The women had macular degeneration, an eye disease that causes vision loss, and they paid $5,000 each to receive stem-cell injections in 2015 . . . Staff members there used liposuction to suck fat out of the women’s bellies, and then extracted stem cells from the fat to inject into the women’s eyes.” They “suffered severe, permanent eye damage…”

Desperate people will try desperate things in order to receive desirable results. It is my opinion that the FDA is acting properly by providing at least a level of protection from those who would exploit the desperation of suffering people.

Selective data collection – what do we know about the risks of IVF?

A recent article in Newsweek reports on a physician, Dr. Jennifer Snyder, who is calling for the formation of a registry of egg donors to help determine the risks to women who, for monetary compensation, “donate” eggs to other women undergoing IVF. Her motivation in calling for this registry was the death of her daughter at age 31 from cancer, after her daughter had donated eggs on three occasions. She points out that egg donors are commonly told that there are no known long-term risks of egg donation, but that the reason there are no known long-term risks is that the risks of egg donation have never been studied.

The article reports that Alan Penzias, chair of the practice committee at the American Society of Reproductive Medicine, agrees that such a registry is needed, and states that “national reporting on IVF, including data on both mothers and babies, is required by law.” It is good that this representative of those who practice reproductive medicine is in favor of a registry to assess the risks of egg donation, but there is a problem with his statement about the current reporting that is done on IVF in the United States.

That reporting on IVF is done under the authority of the CDC by the National Assisted Reproductive Technology Surveillance System. According to their National ART Surveillance website, what they measure to comply with the Fertility Clinic Success Rate and Certification Act is data about patient demographics, patient medical history, parental infertility diagnosis, clinical parameters of the ART procedure, and information regarding resultant pregnancies and births. The outcomes data is limited to information about the percentage of IVF cycles that achieve pregnancy and live birth, information on how many of those pregnancies are single or multiple gestations, and how many are delivered prematurely or at term. No data is collected on the complications or ill effects that women who undergo IVF may experience, and no data is collected on birth defects or any other adverse consequences, other than prematurity, birth weight, and plurality, for the infants born by way of IVF.

Dr. Snyder’s call for a registry for data on adverse effects experienced by women who donate eggs is absolutely necessary to be able to give women the information needed to be able to make an informed decision about being an egg donor. There is an urgent need for the same type of registry of adverse outcomes for women who undergo IVF and the children produced by IVF. It is inexcusable to expect women to consent to these procedures without knowing the risks because those who perform the procedures have failed to collect data about those risks.

Is Obfuscation Ever Helpful in Science or Ethics?

Obfuscation and science would seem to be polar opposites. The scientific method hinges upon correctly identifying what one starts with, making a single known alteration in that starting point, and then accurately determining what one ends up with. Scientific knowledge results from this process. Accidental obfuscation in that three-step process necessarily limits the knowledge that could potentially be gleaned from the method. Peer review normally identifies and corrects any obfuscation. That is its job. Such peer review can be ruthless in the case of intentional obfuscation. It should be. There is never any place for intentionally misrepresenting the starting point, the methods or the results.

Until now?

In an excellent article in Technology Review, Antonio Regalado describes the current status of research where human embryonic stem cells “can be coaxed to self-assemble into structures resembling human embryos.” The gist of the article is that the scientists involved are excited and amazed by the stem cells’ ability to self-organize into structures that closely resemble many features of the human embryo. Perhaps more importantly, per Regalado:

“…research on real human embryos is dogged by abortion politics, restricted by funding laws, and limited to supplies from IVF clinics. Now, by growing embryoids instead, scientists see a way around such limits. They are already unleashing the full suite of modern laboratory tools—gene editing, optogenetics, high-speed microscopes—in ways that let them repeat an experiment hundreds of times or, with genetic wizardry, ask a thousand questions at once.”

This blog has reported on Synthetic Human Entities with Embryo-like Features (SHEEFs) before (see HERE and HERE for starters). The problem from a bioethical standpoint is this: is what we are experimenting upon human, and thus deserving protections as to the type of research permitted that we presently give to other human embryos? Answering that ethical question honestly and openly seems to be a necessary starting point.

Enter the obfuscation. Consider just the following three comments from some of the researchers in the article:

When the team published its findings in early August, they went mostly unnoticed. That is perhaps because the scientists carefully picked their words, straining to avoid comparisons to embryos. [One researcher] even took to using the term ‘asymmetric cyst’ to describe the [amniotic cavity-like structure] that had so surprised the team. “We have to be careful using the term synthetic human embryo, because some people are not happy about it,” says [University of Michigan professor and lab director Jianping] Fu.

“I think that they should design experiments to focus on specific questions, and not model everything,” says Insoo Hyun, professor and ethicist at Case Western Reserve University. “My proposal is, just don’t make the whole thing. One team can make the engine, another the wheels. The less ambiguous morally the thing is that you are making, the more likely you can do your research unimpeded.”

“When Shao presented the group’s work this year, he added to his slides an ethics statement outlined in a bright yellow box, saying the embryoids ‘do not have human organismal form or potential.’”

This last comment seems to contradict the very emphasis of the linked article. As Regalado nicely points out: “The whole point of the structures is the surprising, self-directed, even organismal way they develop.”

Honestly, at this point, most are struggling to understand whether or not the altered stem cells have human organismal form or potential. I suspect everyone thinks they must or else researchers would not be so excited to continue this research. The value of the research increases the closer a SHEEF gets to being human. If our techniques improve, at what point does a SHEEF have the right to develop as any other normal embryo? Said differently, given their potential, and particularly as our techniques improve, is it right to create a SHEEF to be just the engine or the wheel?

Having scientists carefully picking their words and straining to avoid comparisons is not what scientists should ever be doing. Doing so obfuscates both science and ethics. Does anyone really think that is a good thing?