The ethics of mind-reading

 

A study that sounds like the stuff of science fiction was recently published in PLoS Biology (if you don’t speak Scientific Gobbledygook, it is translated here). In the study, scientists were able to identify the words that human subjects were thinking by analyzing the electrical patterns in certain parts of their brains. Scientists hope that someday this line of study may lead to techniques allowing people who cannot speak, because of some type of brain damage, to communicate by direct neural control of devices that would, literally, read their minds and speak for them.

In his book The Technological Society, Jacques Ellul described the characteristics of technology in modern society. (Actually, he wrote about technique, of which technology is a subset.) One characteristic, which he termed monism, is that a technology tends to spread and be applied everywhere it can be applied, without regard to whether a given use is “good” or “bad,” because monism “imposes the bad with the good uses of technique.” Ellul provides many examples to back up his assertion.

The type of “mind-reading” described in the PLoS article is in its infancy, and may never progress beyond the stage of interesting but not very practical experiments. But it is not difficult to imagine the sort of pernicious ends for which such technology might be used if it lives up to the hope of researchers and ends up in the wrong hands — say, the paranoid rulers of a modern security state. It is not difficult to imagine what someone with wrong intent or motives could do with the power to see into another’s mind. And if Ellul is right, there will be a natural tendency for the technology to be put to such uses.

Rather than simply being reactive, bioethics must be proactive: even now, while such technology is in its infancy, it must place safeguards around its uses, to try to ensure that the potential benefits are realized while the potential threats to human thriving and dignity are thwarted. The attempt to limit technology’s application, to shepherd it into what we consider ethical uses, will go against all of the inherent tendencies of technology. It will go against all of our society’s unquestioned faith in the benefit and rule of technology. But it is necessary if such technologies are not to be wielded by some as a terrible power over others.

 

Eight is Enough

 

In response to a family’s having eight babies by IVF and gestational surrogacy:

“In this society, if you have money, you can have miracles!”

“Having children is now a luxurious game for the rich!”

“This completely topples the traditional meaning of parents.”

“From the sound of it, they just tried to have some kind of baby machine.”

“Gestational surrogacy is the business of renting out organs.”

“Why did they have to hire so many people to have babies for them? Did they think they had the right to bear children just because they were rich? Secondly, what respect to life did they show? Multiple pregnancies are super risky.”

These are reactions from the public, press, and government officials to a wealthy couple having two sets of triplets and one set of twins via IVF and two surrogates in China, where there has been an official one-child-per-family policy since 1978. Last month a southern Chinese newspaper broke the story of this family, and you can sense the angry reaction of their society in the quotes above.

(There is apparently a large surrogacy industry in China, despite a 2001 ban on Chinese hospitals doing the procedures. The manager of one surrogacy agency reports being overwhelmed with applications from aspiring surrogate mothers, most of whom are having emergencies and “need a large sum of money.”)

In the uproar, we can see erupting some of the tensions surrounding these technologies that are still somewhat under the surface in our own society: What about the divide between those who can and can’t afford reproductive technology? What does it mean to be a parent, especially where surrogacy is involved? Is surrogacy the commodification of women, the reduction of woman to womb?

There is a lot of worrying that China will catch up with and surpass the West economically and culturally. It seems that in some areas they have already caught up with us: pushing the envelope of societal norms with the use of reproductive technologies, and commodifying women in the process. In another area they are still far behind us: they have not yet lost the ability to be uncomfortable, shocked, even a little disgusted at the ethical implications of these technologies for families and society.

 

(Sources: Here and Here)

How private enhancement decisions led to a public health crisis

 

The proponents of using medical techniques not just for treating disease and dysfunction, but also for enhancing normal form or function, often appeal to privacy. Since most public and private insurance schemes do not pay for enhancement technologies, people who desire such “treatments” pay out of their own pockets; so, the argument goes, if they’re not hurting anybody, and they’re paying for it themselves, what’s the problem?

One of the more popular enhancement technologies worldwide is the cosmetic surgical procedure of breast augmentation. In the last few weeks a crisis of sorts has erupted around a particular brand of silicone breast implant, manufactured by the now-defunct French company Poly Implant Prothese (PIP) and exported all over Europe and South America. It turns out that the silicone used in PIP’s implants was not medical-grade, but industrial-grade, made to be used in mattresses; this may make the implants more prone to rupture. Rupture can lead to increases in inflammation and scar tissue formation.

About 300,000 PIP breast implants are thought to have been used worldwide. This week, France and Venezuela took the step of offering to pay for the removal (but not the replacement) of all PIP implants. “We have to remove all these implants,” said Dr Laurent Lantieri, a French plastic surgeon. “We’re facing a health crisis …” France will pay for ultrasounds every six months for those women who opt not to have the surgery.

Two things to note: first, removal of an implant is not like taking out a splinter. It is a major surgery, under general anesthesia, with all of the attendant risks — and expenses — of surgery. Second, other than those women who had implants inserted after breast cancer surgery, all of the women involved paid for their augmentation themselves. But now the state — that is, the citizens of France and Venezuela — will be paying for the corrective surgeries.

All techniques and technologies carry unintended and unforeseeable consequences. Even with the best planning and forecasting, all techniques will surprise us in some way. Medical techniques, because they work directly on the human body, have the potential and power to do very great unintended harm. The silicone breast implant crisis is an example of how choices made in private can have significant unforeseen consequences and costs for the public. The argument that using medicine for enhancement is merely an individual and private decision is simply not valid. How many more individuals will be hurt, and how much more will society pay, as enhancement techniques — and their unforeseen consequences — proliferate?

Losing control at Christmas

 

Throughout most of history, having children was not a matter of exerting control, but of accepting uncertainty. Whether and how the act of making love resulted in children was a mystery. In the pages of Scripture, having children — especially when one had been considered barren — was most often seen as a sign of God’s blessing: think of Eve, Sarah, Rachel and Leah, Hannah, Elizabeth …

Somewhere in the modern epoch the mindset changed. Children are still a blessing, but now they are also a liability, and we calculate how many hundreds of thousands of dollars it costs to raise a child. In the modern view, since childbirth brings liability, it must be brought under control. The most portentous embodiment of this change in mindset is the development of contraception. We now speak of “planned” and “unplanned” pregnancies — another way of saying “controlled” vs. “uncontrolled.”

But this is not enough control for moderns, for all contraception, other than abstinence, is imperfect. So when contraception fails, when we lose control, we establish the option of abortion, by which we re-assert control, by which we affirm the supreme modern value of control over life.

But even this degree of control is not enough. Why should we stop at merely preventing children, when we can control their conception? Thus we pursue reproductive technologies, by which the woman barren, like Rachel, or too-old-to-have-children, like Elizabeth, can produce a child. Yet this is still not enough; there is still too great an element of uncertainty, so we assert an ever-greater control over the process of conception by testing these children of reproductive technology before they are born or even en-wombed, in order to control who will live and who will not. Again, the mindset changes: children now are not only a blessing and a liability, but a product, manufactured to certain specifications and precise tolerances.

“Control” is not a bad thing. There are many in this world who would be much better off if they had a greater degree of control over their lives. But since we are a fallen race, the more we seize control of something, the more we ruin it in the process. We see this in our physical environment as we have increasingly asserted control over it; we will see it in our humanity if we continue in the path of controlling ourselves through enhancement and controlling our offspring through genetic manipulation. One of the most vexing questions bioethics must answer is, How much control is right? And when have we gone too far?

Contrast the modern techno-birth with the most important birth in all of history, which was not a matter of control, but of surrender, surrendering control over birth. In the process, the “perfect” contraception — abstinence — fails! Yet from this act of surrendering comes the greatest gift the world has ever received. Is there a lesson here? Does our greatest good always lie not in grasping for greater and greater control, but in knowing when to relinquish control and surrender?

 

Of IOM, IT, EMRs, patient safety, and quality

 

If your doctor’s not looking you in the eye quite as much as he or she used to, it may be partially the fault of the Institute of Medicine (IOM).

In 1999, the IOM published a report entitled “To Err is Human: Building a Safer Health System,” which famously concluded that preventable medical errors cause up to 98,000 patient deaths annually. This was followed by the 2001 report, “Crossing the Quality Chasm: A New Health System for the 21st Century.” These reports touted, among other things, the power of health information technology (IT), including Electronic Medical Records (EMRs), to reduce medical errors, increase patient safety, and increase the quality of medical care. Subsequently, the federal government has stepped in, providing financial incentives for physicians who can demonstrate “meaningful use” of an EMR, and will soon be imposing financial penalties on those physicians who don’t climb onto the EMR bandwagon. Thus, the IOM is directly or indirectly responsible if your doctor isn’t looking you in the eye because she’s gazing into a computer screen instead.

Upon what evidence did the IOM base its assertion that EMRs would improve safety and quality? Well … you know … it’s just kinda obvious, isn’t it? I mean, after all, it’s technology, and it’s gotta be better than paper, and it just makes sense that using more technology is better, right?

In fact, there was no data to suggest that health IT would improve either the quality or the safety of medical care. In the intervening years, as health IT implementation has exploded, there continues to be a paucity of data to suggest that health IT improves either the quality or the safety of medical care. There is good data that it introduces new errors and quality problems into health care.

Last month the IOM released a new report, calling for the formation of an independent federal body to investigate patient deaths and other adverse events caused by … drumroll, please … health information technology.

Dr. Richard I. Cook, an associate professor of anesthesia and critical care at the University of Chicago, said, “It’s not surprising that such adverse events are being found related to health IT, and it’s not surprising that those promoting these systems have neither looked for them nor anticipated them. To make large-scale investments in these systems and only now be looking at the impact on patient safety borders on recklessness.” Dr. Scot M. Silverstein, a consultant in medical informatics at the Drexel University College of Information Science and Technology in Pennsylvania, said that it is “unethical” to expand health IT so dramatically without understanding the precise nature of the risks it poses to patients.

“Reckless” … “unethical” …

Meanwhile, my doctor’s still not looking me in the eye because he’s trying to find something in the computer. Sheesh! This is quality improvement?? Have we simply created a new “Quality chasm”?

 

(The quotes above are from this story, which was published in the AMA news.)

Knowing too little about too much

 

With the ability to map the human genome, we find ourselves in the bewildering position of knowing too much and knowing too little at the same time.

Consider this scenario: The year is 2015. You, being the modern that you are, want to know your future, so that you can have some degree of control over it. You’re pretty sure astrology isn’t very helpful; but you’ve been keeping up with Time and Newsweek, and you’re thinking from what you’ve read there that genetic testing offers the scientific equivalent of what astrology promises. So you go down to the local Genetics-R-Us and for a mere $99 have your entire genome analyzed in 15 minutes. You then sit down with one of their genetic consultants, who reveals that you have a 64% likelihood of developing diabetes and a 43% chance of developing colon cancer. You go on a vegan diet, exercise three hours a day, and start a regular regimen of bowel cleansing and weekly colonoscopies. You have your genome analysis results sent to your primary care provider (PCP) to be part of your medical record.

Fast forward to 2025, when you are diagnosed with a rare cancer of the nose. After a little research, you discover that this particular type of cancer can be predicted by genetic testing. Genetics-R-Us went out of business, so you go to your PCP and demand to know why she didn’t warn you about the possibility of this cancer. She steps out to do a little research and comes back into the room:

“It turns out that the gene that predisposes you to this kind of cancer wasn’t discovered until 2019, and you had this test done in 2015.”

“But when that information became available, why didn’t you go back and recheck my genome?” you reply.

“That’s the responsibility of the company that tested you,” she says, as she gets her defense lawyer on the phone.

“But Genetics-R-Us went bankrupt! You’re the only one who has the data!”

“We have thousands of these genome maps in our records, each consisting of six billion base-pairs. They are encoded in various formats, none of which are compatible with each other, and some of which are so outdated we can’t access them anymore. Plus, a 200-page update of the latest new gene discoveries is published every month. We simply don’t have the resources to go back through everybody’s individual genome and check for all of these genetic abnormalities that are constantly being discovered.”

***

With the capability to map an individual’s genome, we can gather lots of data, but we do not yet have the knowledge of how to apply that data (much less the wisdom with which to use it!). We know too little about all that we know. As genome testing becomes more affordable and widely available, some of the ethical questions that arise are: Is there an ethical obligation to go back and re-analyze data in light of new findings? If so, whose responsibility is it?

A busy week for stem cells

Two bits of news from the world of stem cells this week:

First, Geron, the California company conducting the first ever official study using embryonic stem cells in humans, has suddenly terminated the study. Geron cited economic factors as the reason for stopping the trial. The study involved spine injury patients; Geron said only that the therapy was well tolerated, with no serious adverse events.

Second, a study using “adult” stem cells from patients’ own hearts to repair their own damaged heart tissue has produced promising results. The study’s purpose was not to show that the use of the cardiac stem cells was effective, but to make sure that the process is safe (the Geron study was also a test of safety); but study subjects receiving their own stem cells have already shown improvement in heart function.

Daniel Heumann, of the Christopher and Dana Reeve Foundation, said of the halted Geron study, “I’m disgusted. It makes me sick. To get people’s hopes up and then do this for financial reasons is despicable. They’re treating us like lab rats.”

The authors of the adult stem cell study, while warning that the results of the trial needed confirmation in larger trials, called the initial improvement in cardiac function “very encouraging.”

Geron has invested tens of millions of dollars in embryonic stem cell therapy in the past decade.

Even if one does not believe that it is unethical to destroy our offspring to find cures for our diseases, one should at least acknowledge that spending tens of millions of finite research dollars on an agenda that repeatedly uses reckless hype to get people’s hopes up, only to dash them, is an unethical option when compared with funding “encouraging” research with a proven track record of producing successful treatments.

Science and a Christian worldview

Christian bioethics continuously lives at the interface of biotechnology and Christian moral values. Recently some students asked me to talk with them about whether I saw any conflicts between science and a Christian worldview. Their question took me back to the first CBHD bioethics conference that I attended in 2007 and Alvin Plantinga’s talk about that issue. He expressed things that I had understood, but had never heard expressed as well as he expressed them.
Plantinga made it clear that the conflict was not a conflict between Christian thought and science, but a conflict between the philosophy of naturalism and Christianity. He pointed out that many people assume that science, which is a method of acquiring knowledge about the physical world, is identical with philosophical naturalism, which says that all that exists and all that we can know is what we can know through the empirical methods of science. However, understanding that science is a proper way to learn about the physical universe does not imply that naturalism is true, and science does not depend on supposing naturalism. In fact, Plantinga showed that naturalism forms a very poor foundation for science, because the unguided evolution that the naturalist must assume as the process by which human cognitive faculties were formed gives us no reason to believe that those faculties would be reliable sources of truth. (I always knew there was some reason why I liked epistemology.)
It is actually a Christian worldview that provides the foundation that science needs to function. We believe that God has created the universe so that it is rationally understandable and has given human beings the ability to accurately perceive the universe and cognitive faculties that are designed to comprehend truth. Those are the presuppositions needed to expect science to be a valid method for discovering the nature of our universe.
The problem is not that there is a conflict between science and a Christian world view. The problem is why someone without a Christian worldview would think that science is a reliable source of truth.

The ethics of PSA testing

 

The humble little PSA test has become a hot-button ethical issue.

The PSA (prostate-specific antigen) test is a blood test that can detect prostate cancer at an earlier stage than can physical exam. It is not a perfect test; it misses about 25% of cancers. But it is the best thing we have for detecting prostate cancer early.

The United States Preventive Services Task Force (USPSTF) reviews all of the available evidence regarding screening tests for various conditions, and makes recommendations based on the scientific evidence. Earlier this month, the USPSTF posted a draft of its update to its 2008 prostate cancer screening guidelines. The earlier guidelines had recommended that men over 75 not be screened with a PSA test, and said that there wasn’t enough evidence to make a recommendation one way or the other for younger men. The proposed new guidelines, based on more recent studies, go further, giving screening a “D” recommendation, which means that there is moderate or high certainty that the service has no net benefit, or that the harms outweigh the benefits, and the task force discourages use of the service.

But how can a PSA cause harm? It’s just a poke in the arm, right?

It is not the test itself that causes harm, but what we do with it. Ninety percent of men with PSA-detected prostate cancer undergo radiation and/or surgical treatments that have considerable risks and side effects. The chair of the USPSTF said that for every 1,000 men treated for prostate cancer, five die of perioperative complications; 10-70 suffer significant complications but survive; and 200-300 suffer long-term problems, including urinary incontinence, impotence, or both.

These numbers might be acceptable if there were evidence that treating early prostate cancer did some good. But, counterintuitive as it may seem, studies have shown little if any positive benefit from treating prostate cancer early. When men diagnosed and treated by PSA screening are compared with those who are not treated, there is virtually no reduction in prostate cancer mortality at 10 years.

J. A. Muir Gray wrote, “All screening programmes do harm; some do good as well.”

For a profession that takes seriously Primum non nocere, “First, do no harm,” it seems, with what we know at the present time, that this particular screening test may contravene our first ethical principle.

Human cloning “breakthrough?”

 

Last week, Joe Gibes commented on the BBC’s report on the scientific “breakthrough” of Dieter Egli and his colleagues at the New York Stem Cell Laboratory. As the BBC and other news agencies presented it, Egli et al derived human embryonic stem cells through a “cloning” technique, but as Joe correctly noted, no clones were produced. In transferring the nucleus of an adult skin cell to a nucleated human egg and retaining both sets of DNA during subsequent embryonic development, the researchers had actually created not clones but genomic hybrids with, I would add, a generally lethal defect (triploidy).

I see nothing sinister behind the imprecise language, and I don’t think Joe does either. It is, in fact, understandable as the method employed was essentially a modification of the technique (“somatic cell nuclear transfer”) that Ian Wilmut and his colleagues at the Roslin Institute used 15 years ago to create “Dolly” the sheep, the world’s first clone of an adult mammal. The chief difference is that Wilmut et al transferred the donor nucleus to an enucleated egg.

Cloning adult mammals is, to put it gently, no piece of cake. In their ground-breaking publication that introduced Dolly to the world, Wilmut et al reported making 434 attempts to fuse a nucleated donor cell to an enucleated egg. That effort yielded 277 (63.8%) zygotes (“fused couplets”), which they then transferred to ligated sheep oviducts for culture. Of those 277, 247 (89.2%) were recovered, and of those, only 29 (11.7%) developed to a transferable stage (morula/blastocyst). As a fraction of the original 434 eggs, that represents 6.7%. Now, 14 years after the initial report of Wilmut et al, the process of producing clones from adult mammalian cells remains highly inefficient.
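For readers who want to verify the efficiency figures, the percentages above follow directly from the raw counts reported in the Dolly paper. A quick sketch (the counts are taken from the figures quoted above; the variable names are mine):

```python
# Raw counts from Wilmut et al's Dolly experiment, as quoted above
attempts = 434       # nucleated donor cells fused to enucleated eggs
zygotes = 277        # "fused couplets" produced
recovered = 247      # couplets recovered from oviduct culture
transferable = 29    # developed to morula/blastocyst stage

print(f"fusion rate:   {zygotes / attempts:.1%}")        # 63.8%
print(f"recovery rate: {recovered / zygotes:.1%}")       # 89.2%
print(f"development:   {transferable / recovered:.1%}")  # 11.7%
print(f"overall yield: {transferable / attempts:.1%}")   # 6.7%
```

Each stage looks tolerable on its own; it is the multiplication of losses across stages that drives the overall yield down to 6.7%.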

The “Holy Grail” of cloning, as Joe and others have put it, is not a cute little lamb, but a human embryo. Most researchers in the field recoil at the idea of cloning to reproduce full-sized copies of ourselves, but they salivate at the prospect of creating disposable, embryonic miniatures whose genomic identity would purportedly constitute for their “parents” the key to great medical benefit. Human biology, however, has proven quite resistant to such designs, and that, I would submit, is the “story behind the story” of Egli et al. These researchers moved to the hybrid model because of their inability, and that of many others before them, to produce an embryonic human clone using techniques that, albeit with great inefficiency, have proven successful in animals.[1] What the BBC and others tout as a “breakthrough” is, in fact, little more than an affirmation of the status quo in human cloning research.

Whether human biology will continue to frustrate the dogged efforts of Egli and others to produce a human clone, only time will tell. But at this stage, reality certainly mirrors fiction, as the quest for the “Grail” remains exactly that – a quest. Sadly, this quest to clone ourselves exacts a great toll as it drains finite resources and, more troublingly, does great violence to human dignity by reducing human life to an object of mere utility. That we would dump the quest and focus our health resources elsewhere seems a right and sensible thing to do, but I’m not banking on that happening soon, as many a policy-maker and researcher is “all-in” on the gamble of so-called “therapeutic” cloning. That researchers have already discovered a method for re-programming adult human cells to a pluripotent state that requires neither a human egg donor nor an embryonic intermediate reveals the ongoing quest to produce a human clone to be less about advancing good science and medical therapy and more about satisfying a prior agenda.


[1] Sheep (1997), Mouse (1998), Gaur (2000), Pig (2000), Mouflon Sheep (2001), Cat (2002), Cow (2002), Goat (2002), Rabbit (2002), Deer (2003), Horse (2003), Mule (2003), Rat (2003), Wildcat (2003), Dog (2005), Banteng (2005), Ferret (2006), Swamp Buffalo (2006), Gray Wolf (2007).  See the US Food and Drug Administration online publication “Technology Overview: Somatic Cell Nuclear Transfer and Other Assisted Reproductive Technologies” at http://www.fda.gov/animalveterinary/safetyhealth/animalcloning/ucm124765.htm