ICDs: Autonomy vs. Beneficence

Implantable cardioverter-defibrillators (ICDs) are like the automated external defibrillators (AEDs) you see everywhere these days: they deliver a shock to a heart in a lethal rhythm in an attempt to restore a normal rhythm. Unlike AEDs, however, ICDs are implanted directly on a patient’s heart, monitor it constantly, and automatically deliver life-saving shocks whenever needed. For patients with symptomatic heart failure who meet certain criteria, the statistics are quite clear: ICDs reduce mortality from sudden cardiac death (SCD) and are the only effective therapy for the prevention and treatment of lethal heart rhythms. And in a recent study in the Archives of Internal Medicine, more than half of doctors were so convinced by the statistical mortality benefit of ICDs that they valued the statistics more than patient preferences in making decisions about ICD placement.

On the one hand, this could be a good thing: here are a bunch of doctors who want to do what is best for their patients (the principle of beneficence). And if there were no downsides to ICDs, maybe it would be less problematic. But for many patients, the tradeoff for decreased mortality from SCD is dying instead from progressively increasing symptoms of heart failure. There are perfectly reasonable patients who, given the choice between the increased chance of a sudden death and the increased chance of a protracted death from heart failure, would choose the former (exercising the principle of autonomy); but if physicians are so enchanted by their gizmos and their ability to postpone mortality that they don’t elicit patients’ preferences — or don’t inform them of the options — then a lot of patients may be getting procedures that they would not want if they knew the full risks and benefits.

Medical technique and technology have come in the last century to wield great power. That power must be exercised with the utmost care, and with the utmost respect for persons and their inherent dignity. Our love for gizmos and all things high-tech blinds us to the fact that all techniques and technologies have unintended and unforeseen side effects. And our love for empirical, statistical data blinds us to the fact that statistics tell us exactly nothing about the person in front of us. Careful exercise of medical power requires that medical practitioners treat their patients not as part of a statistical herd but as individuals, eliciting their individual values and preferences. In many instances in modern American medicine, autonomy has been elevated too highly and led to questionable practices or to medical practitioners abdicating their duties as moral decision-makers; but the remedy for runaway autonomy does not lie in a return to a paternalism in which a doctor makes all of the decisions for a passive patient.

Eugenics in Our Day

Researchers have now developed a technique for genetic testing of a fetus using cells circulating in maternal blood, avoiding the more invasive and dangerous technique of amniocentesis. These new technical capabilities herald the dawn of a new age of eugenics, the pursuit of “good (eu) genes.” With these achievements, physicians can gain knowledge of the child’s genetic makeup as early as 7 weeks after conception. This can mean a new opportunity for interventions earlier in the pregnancy for the sake of the health of the child, or it may provide doctors with more information to inform a decision to abort the child.

Arthur Caplan helped develop guidelines for organ transplants in the 1980s and has for some time pressed for similar oversight of the “wild west” of reproductive medicine, largely because of its eugenic implications. He is well aware that genetic testing could be used for selecting athletic ability, eye color, or gender. Sex selection using abortion is already practiced in countries like India and China, and genetic testing using maternal blood would only make it easier. Caplan, however, is firmly pro-choice, holding that there are good and bad reasons for an abortion. As he puts it, “Sexism is not a good reason for ending a pregnancy.”

What is missing in this discussion is our response to those with diseases and abnormalities. To many, a chromosomal defect like Down syndrome or a physical abnormality like malformed limbs is a good reason for ending a pregnancy. Too often our attitude toward those with abnormalities and diseases is to consider them unfortunate mistakes rather than opportunities to live in fellowship with another human being. We think getting rid of the mistake solves the problem, especially when it involves fetal tissue out of our line of sight. If our drive for perfection bumps into human autonomy, we back off. If it does not, we proceed to get rid of the patient if we can’t get rid of the disease. This is a serious misunderstanding of the ethos of medicine. An improvement in our ethical strategies will not come from a new set of protocols to use in the clinic. It will come about only if physicians adopt a new value system concerning the purpose of medicine and develop their character accordingly.

Henri Nouwen, well known for living in the L’Arche community for adults with disabilities, articulated a vision of such an ethic when he said, “When we honestly ask ourselves which person in our lives means the most to us, we often find that it is those who, instead of giving advice, solutions, or cures, have chosen rather to share our pain and touch our wounds with a warm and tender hand.”

Click here for a video of Art Caplan discussing gender selection.

Where end-of-life and beginning-of-life considerations collide

This month’s issue of Sexuality, Reproduction & Menopause, the journal of the American Society for Reproductive Medicine (ASRM), carries an article entitled “’Last-chance kids’: A good deal for older parents – but what about the children?” The article discusses the growing number of older, post-menopausal women giving birth through assisted reproductive technology (ART), and gives a thoughtful analysis of the ethical points surrounding the use of assisted reproduction in women past childbearing age.

As my clinical ethics professor always said, good ethics begins with good facts. The authors of the article provide good, pertinent facts: data not just on life expectancy at various ages, but – just as important when considering the energy needed for parenting – actuarial data on how many of those years are likely to be spent in good or excellent health. (Should we use ART to give a child to a woman who statistically has very little chance of staying healthy enough to raise the child through high school?)

The article continues by asking, “Is reproduction a right?” Remarkably, instead of invoking the free-for-all autonomy that plagues attempts at ethical analysis of reproductive rights, the authors quote an ASRM Ethics Committee’s report that “Reproductive rights protected under the United States and state constitutions are rights against state interference, not rights to have physicians or the state provide requested services … It is also important to recognize that constitutional rights to reproduce are, like all rights, not absolute and they can be restricted or limited for good cause.” Refreshing, to say the least.

The authors continue with a surprisingly candid evaluation of the consequences for the children of these older parents. They conclude with strong cautions about the use of ART in the elderly which, while falling short of prohibiting the practice, nonetheless give the overzealous practitioner of such techniques reason to pause and consider.

I can understand why a woman might desire to have a child in her older years. However, the inability to conceive a child in one’s 6th or 7th decade of life can hardly be regarded as a disease, and I cannot see any compelling reason why medical technology should be used to treat it. As Dr. R. Landau wrote, “Childlessness is a complex concept, and children are neither medicine nor therapy. They should not be used as means to other people’s ends.” (Quoted in the linked article.)

FMRI and Normal

Recently I was researching functional magnetic resonance imaging, both for a post on this blog and for an article I am writing for The Best Schools blog. I wanted to look at where fMRI has been used in the clinical setting, and was looking through Functional MRI: Basic Principles and Clinical Applications (2006), a very helpful book on the subject. Chapter eight is on “Applications of fMRI to Psychiatry.” In several places the chapter refers to testing a person with a particular mental disorder against a “normal” patient, but it provides neither a quantitative nor a qualitative definition of normal. I am not sure whether I am missing a technical definition here, or whether the definition is assumed.

Brain scanning technologies, such as fMRI, are qualitative, comparative measurements. This means the readings are meaningless unless you compare them to something else, preferably a baseline from the same patient. For example, the chapter on “FMRI and Clinical Pain” mentions that fMRI is a good tool for measuring acute pain, but is not as helpful for measuring chronic pain. With acute pain, one can take a baseline reading while the patient is not feeling pain; then, by inducing pain (usually by touching the site of the acute pain), an image can be taken showing which parts of the brain became more active when the pain occurred. In this way, doctors might be able to classify the pain or develop a treatment to reduce it. Chronic pain is different because doctors cannot take an adequate baseline (no pain) against which to study the neurological response to inducing the pain.
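The baseline-versus-task comparison at the heart of this kind of analysis can be illustrated with a toy sketch. The numbers, grid size, and threshold below are all invented for illustration; this is nothing like a real fMRI pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "brain" of 10x10 voxels. Simulate a resting baseline scan and a
# second scan taken while a painful stimulus is applied. The values
# are invented for illustration; this is not a real fMRI pipeline.
baseline = rng.normal(loc=100.0, scale=1.0, size=(10, 10))
task = baseline + rng.normal(loc=0.0, scale=1.0, size=(10, 10))

# Pretend the stimulus raises the signal in one small region.
task[2:4, 2:4] += 15.0

# The comparison is the whole point: raw readings mean little on their
# own, but subtracting the baseline shows where activity increased.
contrast = task - baseline
active = contrast > 8.0  # crude threshold for "became more active"

print(int(active.sum()))  # number of voxels flagged as active
```

The point of the sketch is simply that the signal is defined by the comparison: without a valid baseline, as with chronic pain or an undefined “normal” control group, the contrast has no clear meaning.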

The chapter on psychiatry, however, compares patients with autism spectrum disorder, or attention deficit disorder, or schizophrenia, or manic depression or obsessive compulsive disorder with brain scans of normal patients. Since this technique relies on a baseline for meaningful information, the lack of clarity on what is meant by “normal” makes it difficult to interpret.

Now, I am not saying that the author of this chapter is a eugenicist, nor am I saying that the field of psychiatry is bunk. Nor am I saying that these disorders have no neurological component. My concern here is actually with scientific method: are researchers able to obtain meaningful data from these scans when the baseline is (1) a different person from the patient (as with chronic pain), and (2) seemingly subjective?

To the authors’ credit, they do point out that as of now “the clinical utility of fMRI to patients has thus far been limited, as no findings have been shown to be diagnostically specific for any psychiatric illness or treatment. Although many hospitals and research facilities complete MRI on psychiatric patients, this information cannot, as yet, be used reliably to generate a psychiatric diagnosis; however scans often are used to rule out the presence of a neurological illness” (185). They are careful not to overstate their case. That is good science; the issue remains what is meant by the comparison to normal.

(By the way, neuroscience is an active field. If new research has come out about diagnosing psychiatric disorders, please let us know in the comments section.)

I do not want to make the mistake of quote hunting, especially because the chapter is very thorough, but I did want to give a sampling of what I mean by comparison to a “normal” subject. Some of the findings are reasonable, but with others, it seems the only conclusion that can be drawn is that this person’s brain responds differently from another person’s.

Autism spectrum disorder (ASD):

Functional MRI research on autism, although limited, has illustrated that individuals diagnosed with autistic disorder demonstrate an alternate method of facial processing when compared to normal healthy control subjects… In contrast to control subjects, when autistic individuals were asked to respond with a button press to determine the emotion of a facial photograph, they again showed no activation in the left amygdala-hippocampal region and left cerebellum. (186)

The patients were people diagnosed with ASD, compared with “normal healthy control subjects” — whom I assume are considered normal and healthy because they were not diagnosed with ASD or any other mental disorder. This was not stated specifically, though.

Schizophrenia

Because of the severity of schizophrenia, much fMRI research has been devoted to it. One study that did seem helpful looked at a patient with schizophrenia before medication treatment, and then after a course of treatment. In this case the baseline is the patient himself, so a comparison can be made. Even so, the drug was judged to be working because the patient’s fMRI came to look more like those of the control subjects.

“Mood Disorders”

Depression and bipolar disorder studies are limited because of difficulties with diagnosis. The studies that have been done, however, compare patients to “nonpsychiatric populations,” which apparently means people who do not meet the criteria for depression, bipolar disorder, or any other psychological disorder.

Certainly there are people who are affected by any of these psychological disorders, and surely many of these disorders have a neurological component. However, I am uncertain how helpful an analytical technique that relies on comparative studies, particularly comparisons to an accepted, yet undefined “normal,” really is for understanding a disorder.

Part 3: Can I Know What’s on Your Mind?

In this third installment concerning military technology, we are going to look at functional magnetic resonance imaging (fMRI). Magnetic resonance imaging is one of the most popular diagnostic tools because it is non-invasive and safe. MRI can be used to determine whether a bone is broken or a tumor is present because it detects differences in tissue density. Various forms of MRI, such as functional MRI or real-time MRI, are used to investigate specific parts of the body or specific activities. Functional magnetic resonance imaging analyzes brain activity, and the military is interested in using fMRI as a more accurate lie detector than the typical polygraph.

Polygraph tests usually measure changes in physiology that are thought to be associated with lying. For example, it is assumed that a person’s heart rate, breathing rate, and sweat production will likely increase if the person is lying. The lie detector will measure when these factors change compared to a baseline. However, polygraph tests are controversial because they can result in false-positives or can be faked so that the person’s physiology does not appreciably change when he is lying. Therefore a more accurate lie detector is needed. Since fMRI provides information on what part of the brain is active, the theory is that it would serve as a more accurate lie detector.

But does fMRI really show us what someone is thinking? When a particular area of the brain becomes active, it consumes more oxygen, and the body responds by sending oxygenated blood to that area. FMRI measures this blood flow. That is the observed phenomenon; the assumption is that it correlates with a particular thought pattern. Furthermore, many of these assumptions rest on the idea that certain functions are localized to certain regions of the brain (a memory region, a decision-making region), which is itself controversial. Scientists who use fMRI for lie detection assume that a lie is neurologically more complicated than the truth, so that if someone is telling a lie, his fMRI scan will show a more complicated pattern.

Importantly, while fMRI may be advertised as more precise or definitive, it is still a qualitative measurement, just like the polygraph. As In Focus, a magazine of the National Academy of Sciences, suggests, “But brain scans encounter the same problem as polygraphs: no physiological indicator, or neural activity pattern, exists that has a one-to-one correspondence with mental state.” Furthermore, because of how fMRI acquires a signal, there is approximately a 6-second delay between the brain signal and the image display, meaning that which part of the brain actually became active in response to a stimulus is still only an estimate. Researchers have been working on improving the time lag; for example, they have looked at heart activity using “real-time MRI.” However, neurological activity is very fast and blood flow is comparatively slow, so there may be a fundamental problem with relating blood flow to specific neurological activity.

Tennison and Moreno discuss in their article on military technology the ethics of using brain scanning technology for lie detection. They focus on whether brain scans would violate the guarantee against self-incrimination, and whether they would constitute an inappropriate search and seizure. I would say the bigger ethical question is the amount of legal weight we should place on a technology that is qualitative and subjective. Should brain scans be considered definitive proof that a person is lying? Technology helps us in many ways; DNA evidence has exonerated and incarcerated many individuals who might otherwise have been given the wrong sentence. But we should be careful about how much we trust the technology. Yes, fMRI can show us brain activity, but it does not show us a man’s thoughts.

Embryos from laboratory produced eggs

The London newspaper The Independent recently reported that a researcher at Edinburgh University is ready to seek permission to try to produce human embryos by the fertilization of mature egg cells that have been produced from ovarian stem cells in the laboratory. The research team took immature human egg cells produced from ovarian stem cells by a researcher at Harvard and transformed them in the laboratory into cells that appear to be mature human eggs. The proof that they are mature eggs will be obtained by showing that they can be fertilized to produce human embryos. The embryos will then be frozen or destroyed, since they are being produced for research and English law requires that they not be allowed to develop past 14 days.

The obvious ethical question is “Should we do this?”

Those who support doing this see the ability to develop fully functional human eggs from ovarian stem cells as a way to enable women who are past the time when their ovaries naturally produce eggs to have children. They also express hope that the ability to produce new egg cells might be a way around the loss of ovulatory function associated with menopause and its attendant problems. It would also be a way of producing a much less limited supply of eggs for use in research, including cloning.

But should we do it?

Ethical concerns abound. Is it worthwhile to create and destroy human embryos to prove that a scientific technique is doing what it was designed to do? Is there any way to determine whether children born with the use of eggs developed from stem cells in the lab are at increased risk for defects without subjecting some children to those risks? How could you justify doing safety studies on children produced by this technique who could not give their consent? Would attempting to delay menopause by inducing the production of new eggs within aging ovaries be a good thing to do? Is it really good to make it easier to do things like human cloning?

For those of us who conclude that human embryos have full moral status, it is clear that producing human embryos in the laboratory to confirm that this technique is successful, and then destroying those embryos, is wrong. Even those who do not think that human embryos have full moral status have reason, out of concern for the safety of the people who could be born using this technique, to think that this is not a good path to start down.

This is one of those things we should not do.

The Virtue of Human Development

New York University bioethicist S. Matthew Liao has recently proposed giving people drugs to predispose them to make decisions in favor of programs aimed at combating climate change:

Yes. It’s certainly ethically problematic to insert beliefs into people, and so we want to be clear that’s not something we’re proposing. What we have in mind has more to do with weakness of will. For example, I might know that I ought to send a check to Oxfam, but because of a weakness of will I might never write that check. But if we increase my empathetic capacities with drugs, then maybe I might overcome my weakness of will and write that check. (1)

What Liao is talking about is something still closely tied to beliefs: the will. Jonathan Edwards spent a good bit of his time writing about the close relationship between these two aspects of human character. If all that were needed was a little perk-me-up to help out a sleepy donor, we would prescribe a cup of coffee. But beliefs and the will are both components of human character, and are therefore changed and molded by the process of maturation. And the maturing of a person takes place in relationship: in relationship to God and in relationship to other human beings. This is the heart and soul (literally) of the human experience. Theologians often use the term sanctification to describe this change within the person as a result of the action of God. This process is ultimately directed toward Jesus, the Mediator who opens the door to making the human heart alive, the One who is the New Adam, the One who is human in the truest sense. Pharmacological manipulation of human behavior seeks to short-circuit the process of human development, thereby essentially taking away that which is truly human. Just think: if the literature describing the story of human struggle and development were eliminated, our libraries would be largely empty. A person no longer growing in relationship with God and with others would be less human. The manipulative means would have done great harm in pursuit of the end behavior.

The renewed interest in virtue ethics in recent years may serve to steer us away from further attempts at manipulation in favor of choosing a path of maturity.

I have always marveled that Meda Pharmaceuticals markets its version of the muscle relaxant carisoprodol as Soma, given the name’s negative connotations. Then again, maybe to them it has no negative connotations at all.

By this time the soma had begun to work. Eyes shone, cheeks were flushed, the inner light of universal benevolence broke out on every face in happy, friendly smiles. (2)

1. Andersen, Ross. “How Engineering the Human Body Could Combat Climate Change.” The Atlantic, March 12, 2012.

2.  Huxley, Aldous. Brave New World.  HarperCollins, 1932 (2006).

Cyborgs and Design Constraints

A recent article in BBC News asks the question: can we build a “Six-Million-Dollar Man”? If that reference is lost on you, the Six-Million-Dollar Man was a made-for-TV movie and television show that aired in the 1970s, based on the book Cyborg. The main character was an astronaut who suffered a debilitating accident. He was equipped with bionic legs, a bionic left arm, and a bionic left eye, and with these features was able to save the world using his super-human abilities. The underlying point of the reference is to ask whether we can go beyond prosthetics and enhance the human body beyond its normal capabilities.

Ironically, many cyborgs in film, television, and literature are people who suffered some sort of trauma that left their bodies vulnerable or operating at sub-standard levels. Examples include Darth Vader/Anakin Skywalker, who became a cyborg after almost dying in an epic battle; Luke Skywalker, who lost his hand in another epic battle with his cyborg father; RoboCop, a cop who almost died at the hands of a gang; Iron Man/Tony Stark, whose heart was irrevocably damaged when he was kidnapped; and the already-mentioned Six-Million-Dollar Man. Rather than having their bodies restored to their previous level of mobility and functionality, these characters are enhanced to amazing levels. (Although Luke Skywalker’s enhanced abilities come not from technology but from mastering the Force, an important point in Lucas’s films.)

The article asks whether we are at a point where enhancement to super-human abilities is possible, and offers the example of humans being able to run at 60mph. While this may have every science fiction fan salivating, there’s this small problem of design constraints:

Bipedalism was not really designed for that kind of running. There’s considerably more efficient ways of moving at 60mph. I don’t know if there’s enough benefit to overcome the difficulties of 60mph running speed…It might be possible to attach a bionic arm with enough strength to lift a car. However, actually doing so could cripple the rest of the body. Falling over while running at 60mph could be equally damaging.

The human body is a work of engineering with all of its integrated parts interacting as a functioning whole. One does not need an anatomy and physiology class to understand this; just throw out your back or injure your hamstring and see how integrated your body really is. Or run in a pair of bad running shoes and see what happens to your feet, ankles, knees, hips, and back. Every movement employs a series of muscles, tendons and joints, not to mention the neural networking required to tell your body to make those movements. It is an interacting whole, and like any piece of engineering, there are design constraints.

Our culture has an obsession with enhancement. In this sense medicine is not about healing; it is about conquering. But what is it that we are conquering? The transhumanists would say that we are raising our fists at Nature by taking control of our own evolution. No longer are we going to be the products of chance and necessity; we will take it from here and will be the products of our own making.

I think if we are honest with ourselves, what we’re fighting against is our own frailty. We want to watch athletes conquer world records. We want superheroes that are stronger than all of the bad guys. We want to see man on the top of the tallest mountain or on the moon or surviving in the wilderness. We want to feel like we are not nearly as vulnerable as we really are.

Perhaps for some of us, we want solace that maybe someone has conquered the very thing that horrifies us the most about our frailty: Death. Death is confounding. Why do creatures like us die like an animal? We can create, have consciousness, are individually unique yet also relationally connected, have ideas, and contemplate our own mortality.  With every world record, every amazing feat of ingenuity, achievement, and technological advancement that pushes our design constraints, there is a background hope that we are one step closer to overcoming our ultimate enemy.

Of course, the BBC article is not talking about immortality. It is only speculating on running faster or lifting heavier objects. But the subject is so tantalizing because, “Eventually you reach the point where you can start doing things that normal people can’t do…” The point isn’t to be “normal” or to restore normal function. Normal people can get in a car wreck, can lose an arm, can go blind, and can hurt themselves doing mundane things. Normal people die.* The point is to be anything but normal. But design constraints place limits on just how far from “normal” we can go. We will never be able to out-run or “out-react” or out-smart every danger. Even if we somehow overcame one design constraint, another becomes more pronounced to the point that what may have started as an enhancement in one sense becomes a detriment in another sense.

The “Six-Million-Dollar-Man” idea is only feasible to a point. It will not save us and it will not give us the resurrected body that we ultimately desire.

*See Isaac Asimov’s “The Bicentennial Man” for an interesting take on this concept with regard to robots with human qualities, the opposite of a cyborg, perhaps.

Technology and health care reform

A couple of letters in this week’s Archives of Internal Medicine provide a picture of some of the more perverse incentives to overuse technology that are built into our current health care delivery “system.” One letter describes a study of proton beam therapy for treatment of prostate cancer. Proton beam therapy has never been shown to be superior to standard photon-based therapy for the treatment of prostate cancer; it is, however, novel, high-tech, “cool,” and way more expensive. The study showed that the mere availability of the technology, rather than any clinical indication, drove its utilization: “If you build it, they will come” (and spend!).

Another letter addressed the systemic factors that influence physicians to use more technology, whether clinically warranted or not: “The sheer amount of technology available may lead some [doctors] to look askance at the value of their clinical skill and bypass them in favor of testing. This can lead to a technological addiction that is every bit as difficult to break as a substance addiction.”  In the reply to this letter, the authors wrote of “several systemic factors that promote a ‘more is better’ approach: a reimbursement system that rewards diagnostic testing while failing to provide physicians enough time with patients to avoid it; performance measures that reward doing more with no attempt to measure doing too much; and a malpractice system perceived to expose physicians to legal punishment for doing too little but not for doing too much.”

The incentive to use more technology is not only inherent in the nature of technology itself (see Jacques Ellul’s The Technological Society), but is built into the fabric of our health care “system.” The cost of that technology is a large part of what is making health care unaffordable for all except the healthy. Any health care reform scheme that does nothing to change these structural incentives is so much wind. The reform schemes put forth by the two major political parties are pathetic, cosmetic band-aids that do nothing to get even close to the root of the problem (“Uh, let’s find different ways for people to buy insurance!”). Such band-aids amount to a joke; only it’s hard to laugh when so much is at stake.

Sources:  Aaronson et al., “Proton Beam Therapy and Treatment for Localized Prostate Cancer: If You Build It, They Will Come,” pp. 280-282; letter from Volpintesta, “Training in Uncertainty Has Value for Primary Care Physicians: Overreliance on Technology Can be Remedied,” p. 297; and the reply by Sirovich et al., p.297, Archives of Internal Medicine, Vol 172 (No. 3), Feb 13, 2012.

The ethics of mind-reading

A study that sounds like the stuff of science fiction was recently published in PLoS Biology (if you don’t speak Scientific Gobbledygook, it is translated here). In the study, scientists were able to identify the words that human subjects were thinking by analyzing the electrical patterns in certain parts of their brains. Scientists hope that some day this line of study may lead to techniques that would allow people who cannot speak, because of some type of brain damage, to communicate by direct neural control of devices that would, literally, read their minds and speak for them.

In his book The Technological Society Jacques Ellul described the characteristics of technology in modern society. (Actually, he wrote about technique, of which technology is a subset.) One characteristic, which he termed monism, is that a technology tends to spread and be applied everywhere it can be applied without regard as to whether it is a “good” or “bad” use, because monism “imposes the bad with the good uses of technique.” Ellul provides many examples to back up his assertion.

The type of “mind-reading” described in the PLoS article is in its infancy, and may never progress beyond the stage of interesting but not very practical experiments. But it is not difficult to imagine the sort of pernicious ends for which such technology might be used if it lives up to the hope of researchers and ends up in the wrong hands — say, the paranoid rulers of a modern security state. It is not difficult to imagine what someone with wrong intent or motives could do with the power to see into another’s mind. And if Ellul is right, there will be a natural tendency for the technology to be put to such uses.

Rather than simply being reactive, bioethics must be proactive: even now, in the infancy of such technology, it must place safeguards around its uses, to try to ensure that the potential benefits are realized while the potential threats to human thriving and dignity are thwarted. The attempt to limit technology’s application, to shepherd it into what we consider ethical uses, will go against all of the inherent tendencies of technology. It will go against all of our society’s unquestioned faith in the benefit and rule of technology. But it is necessary if such technologies are not to be used by some to wield a terrible power over others.