Unenhanced Thoughts about Neural Enhancement

An April 20th post in the Hastings Center’s “Bioethics Forum” brings attention to the recent report by the Presidential Commission for the Study of Bioethical Issues (PCSBI) entitled “Gray Matters: Topics at the Intersection of Neuroscience, Ethics, and Society.”

Chapter 2, “Cognitive Enhancement and Beyond” is a useful summary of issues surrounding “cognitive enhancement,” and provides a brief overview of three scientific goals: maintaining or improving neural health and cognitive function, treating disease and other impairments, and expanding or augmenting function above the normal human ranges. The PCSBI uses the term “neural modifiers” to refer to the broad array of agents that act on the brain across this spectrum of interventions.

Ultimately, the PCSBI provides sensible recommendations regarding the study and use of “neural modifiers.” It rightly attends to “societal background conditions” such as diet, sleep, exercise, and an environment unburdened by toxic agents as a top priority. Other recommendations include the need to prioritize treatments, and to “study novel neural modifiers to augment or enhance neural function.” This is not an endorsement of such modifiers, only an acknowledgment that more information is needed to guide their ethical use. While the PCSBI leaves an open door to the possibility that there may be ethical uses for cognitive enhancement and augmentation, it is protective of children, drawing an ethical line: “Clinicians should not prescribe medications that have uncertain or unproven benefits and risks to augment neural function in children and adolescents who do not have neural disorders.”

The PCSBI raises important cautions about long-term effects, over-medicalization, and exploitation by those who would stand to gain the most (here it cites the pharmaceutical industry, but we would do well to remember that other persons or groups could also find reasons to exploit). The most important contributions of this publication, however, come in the form of questions that the PCSBI does not, and cannot, answer in its brief report. Three such passages stand out:

  • “What might happen, scholars ask, to traditional understandings of free will, moral responsibility, and virtue if science makes significant advances in the ability to technologically control the mind?”
  • “Further, when we consider altering our memories, we trigger concerns at the core of defining one’s self.”
  • “This desire for control might erode our appreciation for natural human powers and achievements.”

The PCSBI urges that more research be performed in order to provide us with more evidence for ethical decision-making, and that “professional organizations and other expert groups…create guidance about the use of neural modifiers.” It does us a service by highlighting these concerns. But we must recognize that while the scientific method may produce clinical evidence to facilitate ethical decision-making, the foundational sense of what it means to be human will never come from a randomized controlled trial.

What Should We Forget?

In January MIT announced a research study published in the journal Cell that reported a way to erase traumatic memories in lab mice using a drug that makes the brain “more plastic, more capable of forming very strong new memories that will override the old fearful memories.” MIT opened its story by referring to “nearly 8 million Americans [who] suffer from posttraumatic stress disorder (PTSD),” and went on to cite potential therapeutic benefits: “The war veteran who recoils at the sound of a car backfiring, and the recovering drug addict who feels a sudden need for their drug of choice when visiting old haunts have one thing in common: Both are victims of their own memories.” The research got positive reviews, including from Dr. Jelena Radulovic, a professor of psychiatry at Northwestern University’s Feinberg School of Medicine, who declared that “the mechanisms that were discovered will provide us with new tools to study memory and maybe tackle fear responses in patients.”

Sheer fascination with the workings of the human brain seems sufficient to justify research into the wonder of human memory; it is quite another step to claim a therapeutic benefit from erasing it.

Memories are not isolated phenomena that can easily be sliced from our lives. They are integrated into our existence, indeed woven in, and become part of our very identity. Far from being disposable, they bring coherence to events around us, including our relationships. We already know the distressing nature of amnesia. If a memory of great personal significance could be erased, would not the discordance in one’s life eventually cause distress? And would not someone with such a gap then seek to understand what filled the gap? Pulling one thread, even if science could be so exact with a chemical, could lead to an unraveling unanticipated by the most well-intended therapist.

To argue that memories should be “extinguished” or excised is also to forget the purpose of memories. We need memories, even the bad ones. In the research cited above, the memory extinguished was that of an electric shock repeatedly received in a specific chamber. How “therapeutic” is it to the mice to forget that they should not venture there again?

But memories do more than help us avoid dangers. They motivate us. They give us reasons to rise above our previous existence. They produce the greatest of human character and achievement. And those are not necessarily our own memories. For to see a memory as something that can be therapeutically plucked from one person’s mind is to view not just one human’s experience as a compilation of disposable parts, but human relationships as well. We all need what others have learned. Each person’s painful memories offer opportunities to learn, grow, find “common sense,” and transcend. They are our chance to become something better than we could be on our own.

There are other concerns as well. What would be the ethical principles for deciding what to erase and what not to? How would we guarantee that this could not become a nefarious tool in the hands of those seeking to harm? We could not, of course.

Erasing memories is like a great denial. It’s what we would do if we didn’t want to deal with the most difficult of circumstances…in other people’s lives. But need it be said that erasing a memory does not erase an evil that caused it, and may render us less able to defend against it?

The Ethics of Mind-Reading


A study that sounds like the stuff of science fiction was recently published in PLoS Biology (If you don’t speak Scientific Gobbledygook, it is translated here). In the study, scientists were able to identify the words that human subjects were thinking by analyzing the electrical patterns in certain parts of their brains. Scientists hope that someday this line of study may lead to techniques that would allow people who cannot speak, because of some type of brain damage, to communicate by direct neural control of devices that would, literally, read their minds and speak for them.

In his book The Technological Society Jacques Ellul described the characteristics of technology in modern society. (Actually, he wrote about technique, of which technology is a subset.) One characteristic, which he termed monism, is that a technology tends to spread and be applied everywhere it can be applied without regard as to whether it is a “good” or “bad” use, because monism “imposes the bad with the good uses of technique.” Ellul provides many examples to back up his assertion.

The type of “mind-reading” described in the PLoS article is in its infancy, and may never progress beyond the stage of interesting but not very practical experiments. But it is not difficult to imagine the sort of pernicious ends for which such technology might be used if it lives up to the hope of researchers and ends up in the wrong hands — say, the paranoid rulers of a modern security state. It is not difficult to imagine what someone with wrong intent or motives could do with the power to see into another’s mind. And if Ellul is right, there will be a natural tendency for the technology to be put to such uses.

Rather than simply being reactive, bioethics must be proactive: even now, in the infancy of such technology, it must be placing safeguards around its uses, trying to ensure that its potential benefit is realized while its potential threats to human thriving and dignity are thwarted. The attempt to limit technology’s application, to shepherd it into what we consider ethical uses, will go against all of the inherent tendencies of technology. It will go against all of our society’s unquestioned faith in the benefit and rule of technology. But it is necessary if such technologies are not to become, in the hands of some, a terrible power wielded over others.


The End of Morality

Part 2 of 2

In the grandiosely titled article “The End of Morality,” published in the July/August Discover, Kristin Ohlson writes of brain experiments not unlike those I wrote about yesterday in “Toward a Brain-Based Theory of Beauty.” Researchers placed subjects in functional MRI scanners, gave them moral dilemmas to think about, and mapped the areas of the brain that lit up during the experiment.

The similarities between the two articles end there. Where the studiers of beauty went no further than asserting what could rightfully be asserted, that there was a correlation between perceptions of beauty and certain areas of brain activity, the studiers of morality marched right past correlation into causation:  “You have these gut reactions and they feel authoritative, like the voice of God or your conscience.  But these instincts are not commands from a higher power.  They are just emotions hardwired into the brain as we evolved.”  Where the beauty study interacted with centuries of thinkers and thoughts about beauty, the studiers of morality are ready to discredit “that inner voice we’ve listened to for tens of thousands of years.”

Ohlson and the researchers she quotes seem to fall into the reductionism of believing that the brain is “all there is,” that there is nothing above or behind what happens in the brain that causes it to behave as it does. She writes of “morality . . . as a neurological phenomenon,” of the “underlying biology” and the “biological roots of moral choice,” failing to see that there may be something underlying the underlying biology, something that can’t be measured in a scanner. Joshua Greene, one of the morality researchers, asserts that “There is no single moral faculty; there’s just a dynamic interplay between top-down control processes and automatic emotional control in the brain.”

The hubris is almost breathtaking:  the article’s headline reads, “Neuroscience offers new ways to approach such moral questions, allowing logic to triumph over deep-rooted instinct.”

This type of reductionistic, naturalistic, materialistic, mechanistic thinking, with its implied determinism, conveys a stunted view of humanity that will diminish our perception of human dignity if we allow it. As Christians — indeed, as humans — we must resist falling prey to this sort of selective memory which remembers that we are dust, but forgets that we received life from the breath of God.


(Postscript:  In all fairness, the two articles that I described were quite different.  The first was a formal scientific study in a scholarly journal, the second an article written by a freelance writer for a popular magazine that has to sell copy to survive.  This fact does not affect my central point, however, which is that the reductionism embodied in the second article — and in so much of the literature surrounding particular fields of research — is false, prevalent, and will diminish our understanding of human dignity if we follow it.)

Beauty and the Brain


Part 1 of 2

A close family member of mine is in a rehab hospital, struggling to overcome a brain injury.  This has naturally led me to reflect again on the nature of our brains, the ineffable complexity of this organ that has the consistency of grape jelly, how our brains are related to who we are as humans, what makes a person a person, free will, and the efforts various scientists, philosophers, and ethicists have made to arrive at a conclusion to these questions.  There is a fascinating body of research related to brain function, some of it disquieting (just as it is disquieting to look into our own souls, it can be so to look into our own brains), much of it disappointingly reductionistic.  Too much of the literature surrounding the research draws unwarranted conclusions from the results of experiments, proclaiming triumphantly that “this shows that what we thought were complex and uniquely human functions really turn out to be just the result of these neurons firing in response to those hormones which evolved in response to such-and-such showing that there’s nothing really special about us after all and that free will is an illusion . . .”  Religious devotion, marital fidelity, sexual preferences, altruism — all of these and more have been explained away by unjustifiably materialistic, reductionistic, and usually evolutionary conclusions drawn from observations of brain function.  In the process, human freedom and dignity are maligned.

When I saw this article entitled “Toward a Brain-Based Theory of Beauty,” I thought for sure that I was in for more of the same triumphant debunking of something — the ability to appreciate beauty — that is unique to humans.  I was pleasantly surprised to find otherwise.  In the study, participants looked at paintings or listened to musical excerpts while lying in a functional MRI scanner.  They were asked to judge each one as “beautiful,” “indifferent,” or “ugly,” and the parts of their brains that lit up with each response were mapped out.  The researchers found that the same part of the cerebral cortex was activated by the perception of both visual and auditory beauty.  In their discussion, the researchers then actually interacted with some philosophical thought on the subject of beauty, before arriving at the conclusion that “Beauty is, for the greater part, some quality in bodies that correlates with activity in the mOFC [a certain part of the brain] by the intervention of the senses.”

Here, it seems to me, is brain research done aright, brain research which respects human dignity.  There are no wild speculations, no debunking, no assumption that “what we’ve observed is the whole story.”  Instead there is humility (“We emphasize that our theory is tentative”), respect for historical human experience and thought outside of science, and the acknowledgement that there is more to beauty than what can be seen with a functional MRI scanner.  This stands in stark contrast to a recent article from Discover magazine with the grandiose title “The End of Morality,” which we will take a look at tomorrow.