Raiding the CRISPR

A couple of gene-editing news items from this week’s science literature:

First, Nature reports that a group in my “back yard,” at the University of California San Diego, has tested gene editing using the CRISPR approach in mice.  Recall that CRISPR is an acronym (for “clustered regularly interspaced short palindromic repeats”) naming a particular molecular mechanism, first discovered in bacteria, that is particularly efficient—though not perfectly so!—at editing genes.  The idea is to find a “bad” gene that you’d like to replace, for example to prevent or treat a disease, and edit it to be the normal version of that gene.

The kicker in this particular mouse study is that it tested something called “gene drive.”  In classical genetics, humans (and other higher organisms) have two copies of each gene.  In sexual reproduction each parent passes one copy of the gene to offspring, so the chance of a particular gene being handed down is 50%.

“Gene drive” is a technique designed to change those odds, and make a particular gene “selfish,” and much more likely to be passed on.  In fact, the idea is that transmission would be 100%, or nearly so.  If that worked, then a new gene would soon take over a population of organisms, and every member would, in a few generations, have that gene.

Why might that be a good thing?  Suppose you are interested in pest control, and you could use the technique to make, say, mosquitoes infertile.  Then they would soon all die off.  Or if you had some other “desirable” characteristic, you could make it so all members of a species (rodents?  Cattle?  People?) have that characteristic.  Assuming it’s determined by one gene, that is.

And assuming that the technique works.  In the mouse experiment, efficiency was only 73%.
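To get a feel for why the transmission odds matter so much, here is a minimal sketch (my own toy model, not the UCSD group’s; all numbers illustrative) of how an allele’s frequency changes when a heterozygous carrier transmits it with probability (1 + c)/2 instead of the Mendelian 1/2, where c is the drive’s conversion efficiency:

```python
# Toy deterministic model of gene-drive spread (illustrative numbers only).
# A heterozygous carrier normally transmits an allele with probability 1/2;
# a drive with conversion efficiency c raises that to (1 + c) / 2.

def next_freq(p, c):
    """Drive-allele frequency in the next generation under random mating."""
    q = 1.0 - p
    # Homozygotes (p*p) always transmit; heterozygotes (2*p*q) transmit
    # with probability (1 + c) / 2.
    return p * p + p * q * (1.0 + c)

def generations_to_fixation(p0, c, threshold=0.99, max_gen=1000):
    """Generations until the drive allele exceeds `threshold` frequency."""
    p = p0
    for gen in range(1, max_gen + 1):
        p = next_freq(p, c)
        if p >= threshold:
            return gen
    return None  # never reached within max_gen (e.g., with c == 0)

if __name__ == "__main__":
    for c in (0.0, 0.73, 1.0):
        gens = generations_to_fixation(0.01, c)
        print(f"conversion efficiency {c:.2f}: "
              f"{'never' if gens is None else gens} generations to 99%")
```

In this idealized deterministic model even a leaky drive eventually takes over, just more slowly; in real populations, factors the model ignores (resistance alleles, selection against the drive) can stall an inefficient drive, which is part of why a 73% figure is reassuring.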

That’s probably good news.   This is one of those techniques that could have serious unintended consequences if tried in the field.  Scientists have been warning about that.  It looks like it’s a way off, but something else to fret about.

The second item involves a clinical trial to treat sickle cell anemia (SCA), an inherited disease in which the red blood cells have abnormal hemoglobin that doesn’t carry oxygen well.  In this approach, blood stem cells from a person with the disease are removed from the bloodstream and gene-edited outside the body so that they make hemoglobin that is not as damaged as in the disease.  The altered cells then become the therapy, and are given back to the patient.

The FDA has put a “clinical hold” on this clinical trial.  Exactly why has not been publicly disclosed (it doesn’t have to be), and it sounds like the trial itself hadn’t started yet; rather, the company developing it was getting ready to start.  This is, in my view, an approach to gene editing that does not pose special or particularly worrisome ethical issues, because the genetic changes are made on “adult” stem cells to treat an existing individual with a disease, in a way that would not entail transmission of altered genes to future generations.

And, probably, it’s a case of “this too shall pass,” and the FDA’s concerns will be answered and the trial will proceed.

But check out the sidebar reporting this in Nature Biotechnology.  If you follow the link you will probably get a prompt asking for payment, but I was able to sneak a free read on my screen.  If you go there, read down to the separate quote (itself picked up from The New York Times) from Dr. George Church of Harvard:  “Anyone who does synthetic biology [engineering of biological organisms] should be under surveillance, and anyone who does it without a license should be suspect.”  Apparently he said that in response to “the publication of an experiment recreating a virus that has engendered fears that such information could be used to create a bioweapon.”

The old “dual use problem,” eh?  We should really fret about that.

Labs are growing human embryos for longer than ever before

That’s only a slight paraphrase of a news feature article this week in Nature.  The clearly written article avoids scientific jargon, includes helpful illustrations, is open-access online, and is readily accessible to the non-specialist.  Check it out.

Key points include:

  • Scientists who do not find it ethically unacceptable to create and destroy human embryos solely for research purposes continue to follow the so-called “14-day rule,” by which such experimentation is limited to the first 14 days after fertilization. At that point, the human nervous system starts to form and the time for twinning is past.
  • The 14-day rule is law in some nations, but until now has not been a practical issue because scientists have been unable to grow human embryos that long in the laboratory.
  • That technical limit has been sufficiently overcome that embryos are now surviving for almost 14 days. Scientists have not directly challenged the 14-day rule yet, but might, and would like to revisit it.
  • Experiments on human embryos in that time have included editing of critical genes to see what happens (sometimes they stop growing), and making hybrids of animal embryos with human cells whose purpose is to “organize” embryonic development rather than remain part of the developing individual.
  • Embryo-like structures, referred to as “embryoids” in the article, and sounding similar to “SHEEFs” (“synthetic human entities with embryo-like features”) are also being created. These entities don’t necessarily develop nervous systems in the same way as a natural embryo, prompting questions of just how much they are like natural embryos, whether the 14-day rule applies, and whether they raise other ethical concerns.

The last paragraph of the article, reproduced here with emphases added, is striking and more than a little ironic in light of arguments that embryos are “just a clump of cells”:

As the results of this research accumulate, the technical advances are inspiring a mixture of fascination and unease among scientists. Both are valuable reactions, says [Josephine] Johnston [bioethicist from the Hastings Center]. “That feeling of wonder and awe reminds us that this is the earliest version of human beings and that’s why so many people have moral misgivings,” she says. “It reminds us that this is not just a couple of cells in a dish.”

Vaccines: Modern Trolley Car Dilemmas

The Trolley Car dilemma is back in bioethics news. For those unfamiliar with it: you alone control a trolley track switch, and you can divert an out-of-control trolley car away from five workers on one section of track only by sending it down the sole alternate section, causing the death of the lone worker there. The dilemma: someone is going to die, and you get to decide who. In a recent editorial in the June 13th New England Journal of Medicine, Dr. Lisa Rosenbaum nicely describes the utilitarian dilemma surrounding the public health risks and benefits associated with a vaccine for the dengue virus, a mosquito-borne virus that annually causes significant severe illness and death worldwide. The dengue vaccine, Dengvaxia, is a real-world trolley car dilemma: it presently can protect large numbers of patients from this deadly virus, but at the expense of causing severe illness and death in a much smaller number of patients, mostly children.

Dr. Rosenbaum describes our response to utilitarian thinking correctly, I think. We don’t mind utilitarian rules that negatively affect others, particularly when the rules tend to confer benefit on our group as a whole (the very definition of utilitarianism), but we resist utilitarian thinking when it threatens to affect us negatively as individuals, despite the overall benefit to the rest of our group. Healthy self-interest often conflicts with the utilitarian calculus that purports to determine the overall benefit to the group. In the case of Dengvaxia, if the deaths caused by the vaccine occurred only in people who would have died from the natural dengue virus anyway, there would be no problem. In other words, by golly, you all were going to die from the widespread disease anyway, and since the vaccine did save some of you from dying, there is really no new or additional loss. Net positive outcome, right?

Sadly, vaccines do not work that way. With Dengvaxia, it may be possible to create a pre-vaccine test for seropositivity, determining whether a person previously had a very mild case of the virus such that they could safely receive the vaccine (to prevent a more severe case of dengue in the future) without suffering a catastrophic outcome. Such a screening test may be possible, but it would cost some unknown amount of additional money and would still not be 100% accurate. And in any case, no vaccine is 100% safe.

How many lives would need to be saved, and at what cost, before we are satisfied with the cost/benefit ratio of Dengvaxia (or any vaccine, for that matter)? Presently the World Health Organization recommends that a pre-vaccination test be developed and that only those who test positive for prior exposure be vaccinated. This effectively says that the vaccination is not only not required but not even presently recommended in endemic regions, despite the fact that Dengvaxia clearly and significantly reduces overall mortality and morbidity. If a disease were more contagious and more lethal than dengue, at what point would the vaccine, however imperfect, become mandatory? This is the ultimate trolley car switch for public health officials.
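The utilitarian arithmetic behind that WHO recommendation can be made concrete with a toy calculation. All numbers below are hypothetical, invented only to illustrate the trade-off, not Dengvaxia’s actual risk data:

```python
# Toy expected-outcome comparison for an imperfect vaccine plus an imperfect
# seropositivity screening test. ALL NUMBERS ARE HYPOTHETICAL.

def severe_cases_no_vaccine(n, seroprev, r_sp, r_sn):
    """Expected severe cases if no one is vaccinated."""
    return n * (seroprev * r_sp + (1 - seroprev) * r_sn)

def severe_cases_vaccinate_all(n, seroprev, r_sp, r_sn, r_vac, efficacy):
    """Vaccinate everyone: seropositives are protected (by `efficacy`),
    but seronegatives take on the vaccine's own risk `r_vac`."""
    return n * (seroprev * r_sp * (1 - efficacy) + (1 - seroprev) * r_vac)

def severe_cases_screened(n, seroprev, r_sp, r_sn, r_vac, efficacy,
                          sens, spec):
    """Vaccinate only those who screen positive for prior infection."""
    # True positives get the vaccine's benefit; false negatives go unvaccinated.
    sp = seroprev * (sens * r_sp * (1 - efficacy) + (1 - sens) * r_sp)
    # False positives are vaccinated (and harmed); true negatives are spared.
    sn = (1 - seroprev) * ((1 - spec) * r_vac + spec * r_sn)
    return n * (sp + sn)

if __name__ == "__main__":
    base = dict(n=1_000_000, seroprev=0.7, r_sp=0.05, r_sn=0.01)
    vax = dict(base, r_vac=0.02, efficacy=0.8)
    print(f"no vaccination:     {severe_cases_no_vaccine(**base):9.0f} severe cases")
    print(f"vaccinate everyone: {severe_cases_vaccinate_all(**vax):9.0f} severe cases")
    print(f"screen first (90% sens / 95% spec): "
          f"{severe_cases_screened(**vax, sens=0.90, spec=0.95):9.0f} severe cases")
```

With these made-up numbers, screening beats both alternatives; but an imperfect test still vaccinates some seronegative people (false positives) and leaves some seropositive people unprotected (false negatives), and someone must decide whether the added cost of the test is worth it.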

Aren’t trolley car dilemmas fun?

A safety concern with gene editing

Hat-tip to Dr. Joe Kelley for bringing this to my attention…

As readers of this blog will recall, there is keen interest in exploiting recent discoveries in genetic engineering to “edit” disease-causing gene mutations and develop treatments for various diseases.  Initially, such treatments would likely use a patient’s own cells: removed from the body, edited to change the cells’ genes in a potentially therapeutic way, then returned to the patient’s bloodstream to find their way to the appropriate place and work to treat the disease.  How that would work could differ—make the cells do something they wouldn’t normally do, or make them do something better than they otherwise do (as in altering immune cells to treat cancer); or maybe make them work normally so that the normal function would replace the patient’s diseased function (as in altering blood cells for people with sickle cell anemia so that the altered cells make normal hemoglobin to replace the person’s diseased hemoglobin).

Or maybe we could even edit out a gene that causes disease (sickle cell anemia, Huntington’s disease) or increases the risk of disease (e.g., BRCA and cancer) so that future generations wouldn’t inherit it.  Or maybe we could edit genes to enhance certain health-promoting or other desirable qualities.

The recent scientific enthusiasm for gene editing is fueled by the discovery of the relatively slick and easy-to-use (if you’re a scientist, anyway) CRISPR-Cas9 system, which is a sort of immune system for bacteria but can be used to edit/alter genes in a lot of different kinds of cells.

It turns out that cells’ normal system for repairing gene damage can and does thwart this, reducing the efficiency of the process.  The key component is something called p53, a critical protein that responds to gene damage; if p53 is abnormal, it may not do that job so well.  When that happens, the risk of cancer increases, often dramatically.  In cancer research, abnormal p53 is high on the list of culprits to look out for.

Two groups of scientists, one from the drug company Novartis and one from the Karolinska Institute in Sweden, have published on this.  p53’s thwarting of gene editing is particularly active in pluripotent stem cells, which are some, but not the only, candidate cells to be edited to create treatments.  These cells are also constituent cells of human embryos.  If the CRISPR-Cas9 process is used on these cells, p53 usually kills them off—unless it’s lacking or deficient, in which case it doesn’t, but also in which case the altered cells could themselves become cancers, later on.

This is something that has to be monitored carefully in developing cells as medicines, so to speak, with genetic editing.  One does not want the patient to appear to be healed, only to develop a cancer, or a new cancer, later on.  One certainly would want to know the risk of that before editing an embryo—an unborn human, a future baby if placed in the right environment—to create a gene-edited human being.

Yet, as I’ve written here in the past, it appears that experimentation in heritable gene editing is pressing on.  I’ve argued, and continue to argue, that heritable human gene editing is a line that must not be crossed, that would place too much trust in the providence of the scientists/technologists who are the “actors” exerting power over fellow humans who become “subjects” in a deep sense of the term; that the risks to the subjects are undefinable; that it would enable perception of humans as “engineering projects”; that the gift of life would tend to be replaced by seeking to limit birth to “the people we want”; that the people acted upon are unable to provide consent or know what risks have been chosen for them by others, even before birth.  Rather than press ahead, we in the human race should exercise a “presumption to forbear.”

A counterargument is that, in limited cases where the genetic defect is known, the disease is terrible, and treatment alternatives are few or none, the risks are worth it.  The recent papers seem to expose that line as a bit too facile.  How many embryos would be created (and destroyed) to develop the technique before “taking it live”?  Could we work things out in animals first—monkeys, maybe?  How many generations would we have to alter, create, and follow to be sure that a late risk, such as cancer, does not emerge?  Or maybe our animal-rights sensibilities stop us from putting monkeys at such risk—maybe mice will do?

The new papers are dense science.  Frankly, I can grasp the topline story but have trouble digesting all the details.  More sophisticated readers will not be so impaired.  The news report, in the English of the general public, can be read here; the Novartis and Karolinska reports can be read (but not downloaded or printed) here and here, respectively.

Coming home to roost

Hoo boy.

Scientists who want to study human embryonic development have heretofore been self-limited by a 14-day rule:  embryos can only be experimented on up to 14 days of age, when they start to develop a nervous system.  This is an attempt to avoid censure for unethical experimentation on human subjects, and is seen as something of a concession, since it does not accept that human life begins at conception.

And, inevitably, they seek work-arounds.  One reported this week by Nature is the creation of human-chicken hybrid embryos.  Why would someone want to do this?  (Jokes about the San Diego Chicken are NOT called for here.)

Well, apparently 14 days of embryo age is when critical organization takes place, directed by “organizer cells” that don’t appear before then.  So a group of researchers did this:  they took embryonic stem cells (which itself might well require creating and destroying an embryo), and made “embryo-like structures” that had cells that either were, or were just like, these “organizer” cells.  (Apparently these structures were not capable of growing into babies, but even if not, ethical issues remain.)  Then they transplanted these cells into chicken embryos, and watched the resulting hybrid grow, and learned something about how human embryos develop.  They figure this is less of an ethical problem than trying to experiment on a fully human embryo older than 14 days, and that hybrids like this might be able to take the place of experimenting on human embryos to answer many of their questions.

Other scientists disagree with this last statement, and still think they must experiment on fully human embryos to get their answers.

Either way, at a minimum it seems that this work will require creating embryos solely for research, and there is in principle no limit on manipulating the human organism in the name of knowledge.  Work is common on some kinds of “hybrid” animals with human cells, such as immune-deficient mice who have human cells transplanted to reconstitute their immune systems.  But that work usually is done with human cells transplanted into fully-formed mice, which appears different from early, hybrid embryos.

The article describing this work says that the hybrid embryos “didn’t live long enough to hatch.”  Wonder what they would have been like if they had.

One Man’s Trash is Another Man’s DNA Treasure

Last month, investigators used big data analysis, public DNA genealogy websites and “discarded DNA” to identify the Golden State Killer (WSJ subscription needed), an individual believed responsible for more than 12 murders, more than 50 rapes, and over 100 burglaries in California between 1974 and 1986. While justice may be served if the legal case remains solid, there are some interesting bioethical issues that warrant discussion.

This blog has previously discussed the ethics of searching reportedly anonymized databases and the ability of algorithms to “unanonymize” the data (see HERE and HERE). The technique used in the Golden State Killer case takes this one step further. Using a public genealogy database site, where individuals looking for distant relatives voluntarily share their personal DNA samples, investigators searched these databases for partial DNA matches. A partial DNA match means the investigators were looking for relatives of the unknown suspect, hoping that identifying information about a relative would lead back to the suspect. Then, using this narrower group of DNA relatives, investigators literally collected DNA samples these people unwittingly left behind, such as skin cells on a paper cup in the trash, so-called discarded or abandoned DNA.

One man’s trash is another man’s DNA treasure.
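The partial-match search works because the expected fraction of DNA shared with a relative falls off in a predictable way with each meiosis separating two people. Here is a minimal sketch of the standard expected-sharing figures (simplified: it ignores the X chromosome, segment-detection thresholds, and chance variation around the averages):

```python
# Expected fraction of autosomal DNA shared with a relative. Each
# independent path to a common ancestor contributes (1/2) ** meioses
# of expected identity-by-descent sharing. Simplified model.

RELATIONSHIPS = {
    # name: (meioses separating the pair, number of shared-ancestor paths)
    "parent/child":  (1, 1),
    "full sibling":  (2, 2),
    "grandparent":   (2, 1),
    "aunt/uncle":    (3, 2),
    "first cousin":  (4, 2),
    "second cousin": (6, 2),
    "third cousin":  (8, 2),
}

def expected_shared_fraction(meioses, paths):
    """Each path contributes (1/2) ** meioses of expected sharing."""
    return paths * 0.5 ** meioses

if __name__ == "__main__":
    for name, (m, p) in RELATIONSHIPS.items():
        print(f"{name:14s} ~{expected_shared_fraction(m, p):6.2%} shared DNA")
```

Even a third cousin, sharing well under 1% on average, can anchor a family tree that investigators then narrow down with ages, locations, and, finally, discarded DNA.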

Presently, neither the method of partial DNA search of public voluntary genealogy databases nor the collection of discarded DNA samples violates the 4th Amendment’s prohibition of unreasonable search and seizure. Neither the Health Insurance Portability and Accountability Act of 1996 (HIPAA) nor the Genetic Information Nondiscrimination Act of 2008 (GINA) provides protection, as none of the data relates to health care records or employment, respectively.

Shouldn’t some law or regulation prevent my personal DNA code from becoming public, particularly if I have not taken steps to publicize it on one of the many public voluntary genealogy sites?

Since your DNA is the ultimate physical marker of personal identity, how much control do you or should you have over it? While you may wish to live a life of anonymity, your extroverted cousin who voluntarily provides her DNA to a public DNA database has just unwittingly publicized some portion of your family DNA as well as traceable personal family data that may allow others to know more about you than you desire. An energetic sleuth dumpster-diving your trash can retrieve your actual DNA. I shred my mail to avoid my social security number or other personal financial information from being obtained and used for identity theft. How do I “shred my DNA” to prevent it from being similarly recovered from my trash?

What may someone do with my DNA information obtained using these techniques? And what should someone be able to do?

You could not have convinced me back in 2001 that anyone would spend money to build cars with 360 video equipment and figure out optimal routes that would eventually become what is now Google Street View. Might not someone do the same thing with trash-sourced DNA samples, perhaps Google DNA View?

We already have figured out the garbage truck routes.

More on genetic medicine

The third and final installment from The Code, a series of 3 short documentaries on the internet about the origins of genetic medicine, is entitled “Selling the Code.”  This is about genetic testing to try to predict risks of diseases, among other things.  Doctors use some of this testing in clinical care and in a burgeoning amount of research.  A number of companies, such as 23andMe, will, for a (not-too-high) price, sequence your genes, or at least some of them, from a cheek swab sample you send, and then give you a report of what the results are and what they might mean.  In cases where there is a simple connection between a genetic abnormality and a disease—if you have the gene, you get the disease—the approach can be very helpful.  But it’s rarely simple.  Even for known cancer-propensity genes like BRCA1 and BRCA2, there are many variants, and what they mean clinically is far from fully known.  In fact, for most of the common diseases we care about, like heart disease, diabetes, and most cancers, the story is complicated indeed.  So what to do with the information is often far from obvious, and careful genetic counseling by a physician who specializes in genetic medicine is a must.

23andMe ran afoul of the FDA a couple of years ago, leading to a long process that resulted in FDA acceptance of a more limited menu of testing by the company.

And some companies will sell you “genetic information” for more trivial concerns—presuming to tell you something meaningful about what fitness regimen you should pursue, or what wine you’ll like.  Caveat emptor, I suppose, although the risks are low for some of this.

AND—companies like 23andMe keep anonymized databases of the genetic information they get for and from their customers, and sell that information to drug companies to support the latter’s research.  An individual can’t be identified in the process (at least, not readily; see my January 2013 post about “DNA research and (non)anonymity”), but the data in the aggregate is valuable to the genetic sequencing company.

These kinds of concerns—particularly what to do with an individual’s information, but also the usefulness of having genetic data on a large group of people to understand disease and help discover new treatments—are germane to an ongoing project of the Hastings Center to assess the implications of genetic testing of the whole genomes of large numbers of babies, to screen for any of several dozen genetic diseases.   Again, most of the babies will be perfectly healthy, and the yield from screening for rare conditions is low.  But people arguably have a right to know about themselves, and parents to know about their newborns.  Yet still, to what end will we use information that we don’t fully understand?  Read a good Los Angeles Times article, that overlaps some of the points in The Code’s video, and provides other useful information in quick-and-easy form, here.

Finally, I was gratified to read that a project to synthesize an entire human genome in the laboratory is being scaled back, at least for now.  Apparently, they can’t raise enough money.  I bet would-be investors aren’t convinced they could own the results and guarantee a return on their money.  I fretted about this in May of 2016 and again in July of the same year.  I encourage readers to click through and read those, as well as the concerns raised by Drew Endy of Stanford and Laurie Zoloth of Northwestern, who criticized both the effort in concept and the closed-door, invitation-only meeting at Harvard to plan it.

That was two full years ago.  A lot is going on under our noses.

Deep Brain Stimulation: the New Mood Modifier?

A patient of mine recently had a deep brain stimulator (DBS) placed to reduce her severe tremors. The stimulator has worked very well to almost eliminate her tremor but has resulted in a side effect that causes her personality to be more impulsive. Her husband notices this more than the patient. Both agree that the reduction in the tremor outweighs the change in her personality, though her husband has indicated that her personality change has been greater than he imagined when they were initially considering the surgery. He has commented that if her new impulsivity were any stronger, he might be inclined to reverse the process. As one might imagine, the patient sees no problem with the impulsivity and remains extremely pleased with her newfound lack of tremor.

I share the preceding clinical vignette as backdrop to a recent article in Nature describing research funded by the US military’s research agency, the Defense Advanced Research Projects Agency (DARPA – the same group that sponsored the early development of the Internet), which is looking into modifying neural activity with the goal of altering mood, and eventually curing mental health disorders. Using patients who already have DBS stimulators in place for treatment of epilepsy or movement disorders such as Parkinson’s Disease, scientists are developing algorithms that “decode” a person’s changing mood. Edward Chang, a neuroscientist at the University of California, San Francisco (UCSF), believes his group has a preliminary “mood map” and further believes that they can use the DBS stimulators to stimulate the brain and modify the local brain activity to alter the patient’s mood. The UCSF group describes this as a “closed-loop” system (using the stimulator both to record from and then stimulate the brain). Chang further admits that they have already “tested some closed-loop stimulation in people,” but declined to provide details because the work is preliminary.

If scientists are on the verge of changing your mood, might they not also be on the verge of creating your urges? Professor Laura Cabrera, a neuroethicist, and Professor Jennifer Carter-Johnson, a lawyer, both at Michigan State University, argue that we need to begin worrying about that possibility, and further that we need to begin considering who is responsible for those new urges, particularly if those urges result in actions that cause harm to other people. The article does a masterful job of exploring the ethical-legal ramifications of just what happens when your DBS causes you to swerve your car into a crowd of people: is it your fault, or did your DBS make you do it?

Returning to my patient, the alteration in her behavior is an unwanted but not a completely surprising result of her DBS to treat her movement disorder. Despite the informed consent, her husband was not prepared for the change in her personality. The treatment to correct my patient’s movement disorder (a good thing) has altered my patient’s personality (a not-so-good thing). My patient’s husband might even argue that his wife is almost a different person post DBS.

When we modify the brain in these experiments, we are intentionally modifying behavior but also risk modifying the person’s actual identity – the “who we are”. As the DARPA experiments proceed and cascade into spin-off research arms, we need to be very clear with patient-subjects in current and future informed consents that the patient who signs the consent may end up very different from the patient who completes the experiment. How much difference in behavior or urges should we tolerate? Could the changes be significant enough that they are considered a new person by their family and friends?

And if that is true, who should consent to the experiment?

New short videos on genetic topics

This week, an email from the Hastings Center promoted The Code, a series of 3 short documentaries on the internet about the origins of genetic medicine.  The three are being released one week at a time.  The first, released this week, briefly (12 minutes) reviews the determination, or sequencing, of the entire human genome, a project conducted in the 1990s, and completed in 2000, by two labs—one in the government, one private—that initially worked in competition but ended up working in collaboration.

It’s a nice review of the key points:

  • A person’s entire genome can be read fast—in a few hours—by an automatic process, at an ever-decreasing cost that now is on the order of $1000.
  • We still are FAR from understanding what the genetic code means for human disease. The number of cases in which there is a reasonably direct link between a single genetic abnormality, or a small number of them, and a disease, in a way that allows us to predict risk of disease or be able to make an enlightened selection of treatment, is still small.
  • With more reading of people’s genomes, and more computing power, what amounts to a massive pattern-recognition problem will likely yield more solutions that can be practically exploited to the benefit of human health. Some entities are collecting more people’s genomes in a database, for ongoing analysis and, at first, hypothesis generation—that is, “maybe this is a lead that could be acted on for benefit, after the proper follow-on research.”
  • But for now, we should not get carried away—”personalized medicine” is not generally “ready for prime time,” but useful only in a few specific situations, and often most appropriately the subject of new medical research. And one should be careful to get well-informed advice from a medical professional who is expert in genetic medicine, and not over-interpret what a commercial entity might be advising.  (But that, about which this blog has commented in the past, is for another time and another posting.)

This first video does not get into ethical issues—e.g., of justice, privacy, and the like.  But it is a good, quick, engaging overview suitable for the general public.  (BTW, I hate calling non-scientists and non-physicians “lay people,” a term I think best reserved to distinguish most of us from the clergy, and the abuse of which just reinforces the notion of medical scientists as a sort of “priesthood.”)

The second video in the series, due out next week, is on gene editing, and the third, the week after, will address companies that are willing to sequence your genes and tell you, for a price, what they think it might all mean.

The Ethics of Pet Cloning

Anyone who passes through a grocery checkout line on a weekly basis is unable to remain ignorant of the latest thoughts and insights from Hollywood. With ethical pronouncements from Hollywood, I usually find it reliable to point my moral compass in the opposite direction, at least until I have time to further evaluate the issue. Such was the case with a recent National Enquirer scoop that Barbra Streisand had cloned her now-deceased Coton de Tulear dog Samantha, producing two offspring, Miss Violet and Miss Scarlett. The fact that she cloned her pet was interesting in its own right, as I did not realize this process was commercially available to the general (wealthy) public. Perhaps more interesting was the backlash Ms. Streisand has experienced from Twitter (generally) and PETA (specifically), largely on ethical grounds. More on this in a moment. The Streisand scoop actually should be credited to a Variety interview, and the initial ethical discussion to both the New York Times and Fox News (offering, no surprise, differing vantage points).

Some of Streisand’s harshest criticism came from Twitter under the hashtag #adoptdontclone. One argument against the pet cloning process was that it was unjust, given that only rich people could afford the price tag, which according to the NYT link above was around $50,000. Another argument was to remind Ms. Streisand that Miss Violet and Miss Scarlett were not the same as the original Samantha, even though they might look or even act in a manner that might remind Streisand of her dear departed. These arguments touched on the very themes of genetic determinism vs. environmental nurture, admittedly in a rudimentary way. The PETA arguments described the pain and suffering they claimed the female dogs experienced during the egg harvesting required for the cloning process to succeed, arguments eerily similar to the risks women experience related to egg harvesting for some IVF procedures.

The strongest, or at least most popular, argument leveled at Ms. Streisand was that cloning her pet eliminated the possibility that she might adopt an already existing puppy that very much needed a loving owner to provide it a better future. While no one is presently making a similar argument against human cloning in favor of human adoption (human cloning is presently illegal), similar arguments have been made about IVF vs. adoption.

The point of all this is to appreciate some of the ethical arguments the lay press is presently leveling against pet cloning by Hollywood’s elite, and to wonder whether, if and when human cloning becomes accessible to the general (wealthy) public, similar arguments will resurface to protect the humans involved then with the same loud voice used to protect the animals now.