End-of-Life for a Major Hospice?

San Diego Hospice may not make it.

That’s the news out here for an organization that is described as “iconic in the hospice world.”  Last November, a Medicare audit concluded that the government had been overcharged—a lot—by San Diego Hospice.  Apparently the problem was filing claims for people who weren’t quite sick enough—that is, not expected to die within 6 months, as the Medicare reimbursement rules require for hospice care.  And sometimes people get a bit better, so they outlive expectations and are no longer supposed to qualify for continued Medicare reimbursement, but the hospice continued to file claims.

The immediate repercussions have been catastrophic—a Chapter 11 bankruptcy filing, an operating shortfall of about 30%, layoffs of about a third of employees (with more likely on the way), and a reduction in hospice patient census from 1,000 to 450, so the local paper reports.  The CEO says they may not “have a viable organization moving forward.”  How much does Medicare think it should be refunded?  Try a number north of a million dollars—perhaps well north of that.  At least, that is the worry.  Just how much money we are talking about may not be clear to S.D. Hospice—and if so, that may be part of the problem.

The publicly-available information is still incomplete.  I have not seen allegations or speculations about poor management or fraud, and I certainly am in no position to speculate on the details of the case.  My first reaction was to think that an organization pressured by tight reimbursement rules got out “over its skis,” as it were, trying to get paid as much as legitimately possible, and in the process made some mistakes.  I don’t know.

There will still be hospice service available in San Diego County.  Competitors are emerging.  But I hope this will not mean that medically-appropriate palliative care will get too squeezed by payment rules—indeed, at a time when the merits of good routine palliative care are being more prominently discussed, at least in the oncology publications I try to keep up with.  And I guess it means that even for non-profits, money matters, and “good business management” is not a dirty word.

A story to be followed…

Conscience, Data, and the Burden of Proof

Dr. Susan Haack’s recent posts on conscience, and the ongoing struggle over the HHS regulations on mandatory insurance coverage for contraception under the Affordable Care Act, demand more careful reflection than will fit in a blog post, but I will dare to stick a toe in nonetheless.

In The Line Through the Heart: Natural Law as Fact, Theory, and Sign of Contradiction, J. Budziszewski argues (see pp 8-15, for example) that “deep conscience,” which is “rooted in the constitution” of all humans, is a cardinal indicator of the existence of a natural moral law.  Deep conscience “remembers” general moral norms (including, he argues, the Decalogue).  I’d take this to be Dr. Haack’s “antecedent” function of conscience.  Budziszewski then distinguishes three “modes” of conscience:  cautionary, accusatory, and (for lack of a single term) confession/reconciliation-seeking—the “consequent” functions Dr. Haack mentions.  He would certainly agree with Dr. Haack (as do I) that conscience points to a transcendent authority.

Presumably (me talking now, not Budziszewski), we form correct moral convictions by agreeing with deep conscience about moral truth.  However we arrive at those convictions, we can argue that they too have “antecedent” functions in that they are, if properly understood, sufficient to motivate ethical behavior.  (I just glossed over a major discussion in ethical philosophy that I ask the reader to accept for the sake of argument here.)  Convictions do not, however, produce a sense of guilt, of accountability, or of a need for reconciliation.  Conscience does that.  Whether we recognize it or not, conscience is witnessing to our accountability before God.  People who deny God’s existence, however—and who may well also interpret “guilt” to mean a response to bad-faith intimidation by the organized church—can still coherently claim, it seems to me, to act out of conviction with accountability to the community, as long as the standard is some sort of community-recognized norm.  In a pluralistic society, one can appeal to positive law or what we can agree on; or, alternatively, one can appeal to the shared understanding of what it is to be an autonomous moral agent (as I take the German philosopher Jürgen Habermas to do).  Just don’t plead metaphysics.  But the appeal to convictions is not ripped from its community connections—it depends on them, just in a critically different way.

And that, of course, is the problem.  People like me are making a metaphysical argument (actually, I want to argue for a form of natural law) in a positive law world.  Some of the “positive lawyers” claim that their convictions are objective, not relativistic, because they are available to observation, as in the natural sciences, so we can agree on them, revising our understanding as we get new information.  We are left with a sort of “naturalist’s natural law.”  I think that is irredeemably relativistic, in the end—if God is dead, nothing is out of the question.  I understand Budziszewski to agree.  He criticizes the “positive/natural lawyers,” if you will, for pursuing a “second-tablet project”—that is, isolating the “second [stone] tablet” of the Decalogue (Commandments 5-10) from the more explicitly God-directed first four commandments of the “first tablet.”

So what?  First, I would submit that the “conscience/convictions” argument doesn’t help all that much in cases like the HHS mandate.  The issue is how much room to give to particular metaphysical stances—the public/private square problem.  Pluralistic norms vs. religious freedom is still the battle.  And it will not do to say that profit-seeking makes the moral application of metaphysical commitments illegitimate.  To put a fine point on it, Hobby Lobby’s owners ought to be accorded the same freedom of conscience as the Catholic Church, a church-run hospital, or Wheaton College (for example).  I worry, perhaps too much, that bioethicists in particular worship at the altar of non-profit status in ways that risk serious mistakes.

Second (and cf. the recent post by Dr. Joe Gibes), statements like “[The] lack of any substantial evidence for post-fertilization effects [of emergency contraceptives] may significantly weaken conscience claims, and may militate against refusals to dispense or to refer,” [Lewis and Sullivan, Ethics & Medicine 28:113-120, 2012] will not do.  Failure to prove is not disproof.  Absent definitive data, the case for prohibiting emergency contraceptives may be weakened.  But without definitive data sufficient to free the conscience of concern—data that may not be obtainable by ethical experiments—the conscience claims of someone with a reasonable doubt about what the data mean ought to be vigorously defended, even against a strong majority consensus.  We should not let a prevailing tide of naturalistic, “data-driven” ethics confuse our use of the data in service of true moral precepts.

Reminders of the Challenges to Informed Decision-Making

Two recent reports remind their readers how difficult it can be to ensure that a person making a decision or expressing a preference about his or her medical care is doing so with proper information.

First, in the journal IRB: Ethics and Human Research, Kim et al. ask the question, “Research Participants’ ‘Irrational’ Expectations: Common or Commonly Mismeasured?”  (Article free to the public, registration required.)  They cite the oft-raised concern of “therapeutic misconception” in clinical research: people who volunteer for certain clinical trials often fail to understand that the primary goal of the research may not be to demonstrate that a treatment is effective, or they believe that enrollment gives them a better chance of benefit, or even a known chance, when that is not the case.  Or they do not grasp fundamental features of the research, including (or, better, especially) random assignment.  In their study, Kim et al. found what they suggest is evidence that people may understand randomization perfectly well but not apply it to their own situation or appreciate its meaning for them.  They studied people with Parkinson’s disease enrolling in a randomized trial of gene therapy versus a sham procedure.  While the study subjects could readily demonstrate that they knew what randomization was, and what it meant for the likelihood of being assigned to one study arm or the other, when asked which group they thought they personally would be in, many professed ignorance or some level of certainty that they would be assigned to the arm they preferred—viz., the treatment arm.  The study suggests (and the authors say so) that this is perfectly understandable and reasonable human behavior, and that the discrepancy does not mean the subject has been misled or is not intelligent.  Put another way, researchers should not look down on people who appear to overestimate their likelihood of personal benefit, or conclude that such an overestimate necessarily implies a deficiency in the informed consent process.  It can also be read as a bit of fresh air for conscientious clinical researchers who worry that—or are besieged with accusations that—they are taking undue advantage of sick people who want to get better.

The second article, in the Journal of Clinical Oncology, comes from the “Video Images of Disease for Ethical Outcomes” consortium—“VIDEO” for short.  (Everything has to have a slick acronym.)  In this study (subscription required), investigators from four major cancer centers studied 150 people with advanced cancer and an expected survival of less than a year.  They asked the cancer patients whether they would want cardiopulmonary resuscitation (CPR) performed if their heart stopped beating.  Everyone was read a scripted description of CPR with an estimate of its likelihood of success; half the people were then randomly assigned to also view a 3-minute video showing simulated chest compressions on a mannequin and images of an actual ventilated patient receiving medications, with the same script delivered by a narrator.  The underlying premise was that words alone are insufficient to give people an appreciation of what CPR entails.  People who saw the video were more likely to say “NO” to CPR.  (The study did not assess whether anyone actually had CPR performed.)  Women, white people, and people who had higher health literacy (an uncommon trait in our society) were also more likely to say “NO.”  Three-fourths of the people who saw the video said afterward that they were “very comfortable” watching it.  The study authors describe steps they took to keep the video or narrative from being alarmist or unduly influential, and state that pushing a patient one way or another is a real concern for decisions like whether to plan to do CPR at the end of life.  (I couldn’t help but wonder what would happen if people with operable cancers were shown videos of their proposed surgery before deciding to have it—would they be less likely to consent to a procedure with known benefits?)  But when it comes to informed consent issues in general, my impression is that audio-visual tools to aid the decision process are generally viewed as helpful, and we should anticipate greater use of them.  It seems to me that context and equipoise are critical—context, in that any decision-aid tool should be used in the setting of a relationship of open communication and trust between patient and physician, and equipoise in the sense that the physician must have the patient’s welfare and free choice clearly in view, without competition from subordinate goals (like cost control, public or private).

DNA Research and (Non)Anonymity

Last week, the Wall Street Journal reported on a paper in the journal Science (article free with registration), regarding the ability to identify supposedly anonymous donors to genetic research.  Science carried an accompanying perspectives article and news summary.

The upshot:  Imagine a fictitious Mr. Hogswobble (we’ll call him “H” in view of my limited typing skills), who donates a blood sample so his DNA can be sequenced as part of a study of the genetics of a larger number of people, with the goal of learning something that can eventually help diagnose or treat human disease.  H does this because he wants to support good science and medicine, but he’d rather not have his identity known, on the off chance that it could make it harder for him or his family to get insurance, for example, at some unknown time in the future.  So the researchers tell H that they will do everything they can to keep his personal identity anonymous.  He will not be identified in any scientific publication.  The sample and the data gained from it will be “deidentified”; i.e., no personally identifying information, like his name, initials, Social Security number, etc., will be kept in the same place with it.  Maybe there is such a linking record somewhere, maybe not, but if there is, it is under lock and key and held securely.  His sample is given a unique identifier—maybe a number, like “42” (the answer in The Hitchhiker’s Guide to the Galaxy, as it happens).

But the de-identified specimen and data are made publicly available, in the interest of open access for other scientists to work on it.  This kind of sharing is critical to the free operation of good science.  Critically, to make scientific sense of it, the release likely includes certain “metadata,” such as H’s country or state of residence, how old he was when the sample was obtained, maybe even some level of medical information relevant to the scientific research.  But most people looking at the data could tell only that it comes from some guy, not from H personally.  Now, to be sure, this information could be used to narrow down the field substantially—there are only so many 55 year-old men in California, for example (a rough estimate follows)—but other information on how many of those had, I don’t know, hypertension, let’s say, would NOT be readily had because of privacy laws governing medical records.
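To put a very rough number on “only so many 55 year-old men in California,” here is a back-of-envelope estimate written as a tiny Python snippet.  The figures are round numbers I am assuming purely for illustration, not census data.

```python
# Back-of-envelope estimate with assumed round numbers (illustration only, not census data).
ca_population = 38_000_000       # California, roughly, circa 2013
fraction_male = 0.5              # approximately half the population
fraction_any_single_age = 0.012  # roughly 1% of people are at any given year of age

men_aged_55_in_ca = ca_population * fraction_male * fraction_any_single_age
print(f"~{men_aged_55_in_ca:,.0f} men aged 55 in California")  # on the order of 200,000
```

Age and state alone, in other words, narrow the field to a large but finite pool; it takes something like the genealogy database described next to collapse that pool toward a name.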

The problem is that we as a population freely make lots of other information about ourselves public.  (No, I’m not including whether we own a gun; I don’t want to go there.)  That’s the opening the researchers behind the Science paper exploited.  The “metadata” were huge in their work, but the treasure trove was a public genealogy service.  Send us a sample of your DNA, and some personal information (like your name), and we will make all of that public to help you and similarly-interested people find your long-lost relatives, for whatever reason you or they have for being interested.

So there are two public databases—the more limited one with the DNA data and some metadata, and the broader one with DNA data and names—including, quite possibly, one or more men named Hogswobble.  From the first database, the scientific research one, a list of genetic markers can be obtained—in this case, ones called “SNPs,” but we will call them, collectively, “Steve”—and that list can be compiled, then compared with the genealogy database to see how many H’s have DNA with “Steve” in them.  That gives one a guess of whether any of the donors to the first study—the supposedly anonymous donors—are named Hogswobble.  Add some surfing of the ’Net for other publicly available information, and these researchers could finger the identities of 50 donors to an actual scientific study intended to study a total of 1,000 people.  So that’s 5% (50 out of 1,000).
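For readers who want the matching logic spelled out, here is a minimal, hypothetical sketch in Python.  Every name, marker, and threshold below is invented for illustration; this is not the actual method or data of the Science paper, just the general shape of the cross-matching idea.

```python
# Hypothetical illustration of re-identification by cross-matching two public databases.
# All identifiers, markers ("Steve"), and thresholds are made up for this example.

# The research database: de-identified donors with a marker profile plus limited metadata.
research_db = [
    {"id": "donor-42", "age": 55, "state": "CA",
     "markers": {"m1": "A", "m2": "G", "m3": "T", "m4": "C"}},
    # ...in the real study, roughly 1,000 such records
]

# The genealogy database: self-submitted profiles that include surnames.
genealogy_db = [
    {"surname": "Hogswobble", "markers": {"m1": "A", "m2": "G", "m3": "T", "m4": "C"}},
    {"surname": "Smith",      "markers": {"m1": "C", "m2": "G", "m3": "T", "m4": "A"}},
]

def marker_match(a, b):
    """Fraction of marker positions present in both profiles that carry the same value."""
    shared = set(a) & set(b)
    return sum(a[k] == b[k] for k in shared) / len(shared) if shared else 0.0

def candidate_surnames(donor, threshold=0.9):
    """Surnames whose genealogy profiles closely match this donor's de-identified profile."""
    return [entry["surname"] for entry in genealogy_db
            if marker_match(donor["markers"], entry["markers"]) >= threshold]

identified = {}
for donor in research_db:
    matches = candidate_surnames(donor)
    if matches:  # metadata (age, state of residence) could further confirm or prune candidates
        identified[donor["id"]] = matches

print(identified)  # {'donor-42': ['Hogswobble']}
print(f"re-identified {len(identified)} of roughly 1,000 donors")
```

The real paper’s pipeline was considerably more sophisticated, but the basic move is no more exotic than this: join a de-identified profile to a name-bearing one on shared genetic features, then use the metadata to confirm or prune the candidates.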

A fundamental ethical tenet of human subject research is that measures must be in place to protect the privacy and confidentiality of research subjects.  But in the age of big data and research on genomics and other large-population-based biologic matters, assurance of confidentiality can seem like it’s founded on quicksand.  What to do about it?  Take whatever measures reasonably can be taken.  In the informed consent process, tell a research subject that it is simply not possible to provide an absolute guarantee of confidentiality.  Train researchers on ethical behavior—“do not hack Steve,” for example—but realize that in an open-source environment, the sort of steps described here could be done by just about any smart wise guy with Internet access.  Limit the amount of metadata available; NIH is doing just that, although some tough judgment calls may be involved.  Limit the availability of data?  Now things are getting touchy.  Better not to over-react, the scientists reasonably counsel.

Laws are in place, such as “GINA,” the Genetic Information Nondiscrimination Act of 2008, to prevent at least some types of discrimination (e.g., health insurance, employment) based on genetic information.  These issues are with us to stay.  In medical research, protecting our privacy and confidentiality has limits.

More on “Shared Decision Making”

Back on November 27, I posted on shared decision making, or SDM for short, and opined that in broad brush this seems like a mom-and-apple-pie initiative, with the goal of encouraging better communication about an individual’s medical decisions, informed by better data that are more clearly communicated.  Central to that effort is the desirability of tools—written, audiovisual, and the like—that support the decision by making complex medical matters accessible to the average person, who is likely not to be sophisticated about medical or scientific matters.

Now, in a recent “Perspectives” article in the New England Journal of Medicine, Emily Oshima Lee and Ezekiel Emanuel urge more formal efforts to develop, certify in some meaningful sense, and use these decision-support tools.  The discussion strikes me as similar to a long-standing parallel concern about how to develop better consent forms for human subject research.

The kicker is that the authors urge a strong active stance by government to mandate the use of such tools.  This would serve three goals:  “promote an ideal approach to physician-patient decision making, improve the quality of medical decisions, and reduce costs.”

Although the authors of the article seem clearly to endorse a strong physician-patient relationship, with clear communication and decisions aligned with the patient’s values, the assumption is that in the preponderance of cases involving aggressive or costly care, or difficult decisions, the cheaper course of action will also be the medically appropriate course and the one that patients will prefer.  That may often be the case (and they cite reports from groups like the Kaiser Foundation to that effect), but when the patient prefers the more costly approach, there could be a conflict, to say the least.

The government would approve the tools, and, in order to ensure not only their use but the chance to collect data about the effect of using them, would demand they be used in Medicare, on pain of reduced reimbursement along the scale currently imposed on hospitals if they have to re-admit too many patients too soon after discharging them.  The CMS, the agency that administers Medicare, has the legal authority to proceed, that authority having been granted in the Affordable Care Act.  All they have to do is write the regulations, and put them into effect after the legally required advance notice (with open comment period) to the public.  CMS would mandate the use of the tools, not specific decisions about care—at least, they would not necessarily mandate specific decisions, not initially in any event.

Over at his “Human Exceptionalism” blog, Wesley J. Smith worries that this constitutes “the bureaucrat looking over your doctor’s shoulder.”  On one level, I’m not so worried—in my experience, the government is generally an accurate source of summary medical information, as on the NIH websites, for example.  Further, I bet the authors would insist that they do not mean to override individual decisions by a patient and his or her doctor.  It’s an open question how much the decision-support tools that are eventually derived will be written to push decisions one way or another.  In its human subject protection rules, the government has been quite concerned, since at least the Belmont Report, to guard individual safety and choice, so I am loath to jump to conclusions here.


On another level, the goals do not exactly align with what I understood SDM to entail when I wrote about it before.  I thought that we were talking about helping a patient understand choices and clarify his or her values in conversation with the doctor, to support as informed a choice as possible—realizing that fully informed consent remains an elusive goal.  I didn’t think cost control was part of the deal.  While it is important to control costs, and it’s important that doctors not practice ineffective medicine, especially when to do so is expensive, cost control per se seems to me outside the boundaries of what I’d consider SDM to entail.

Further, I think we encounter again the tacit assumption that “data” will usually underwrite unambiguous, general rules that apply to all, or nearly all, medical decisions of a given kind and that are unencumbered by scientific controversy.  Regular readers of my posts will recall that I am suspicious of that assumption.  Add to that an assumption that government officials are especially if not uniquely equipped to create the decision-support tools, analogous to the HHS’s recent suggestion that it develop a single, national informed consent form for use in all U.S. clinical trials, a suggestion that I understand is being questioned by significant parts of the clinical research community.

Were this approach to SDM being taken by the medical insurance industry, we’d rightly be concerned that profit maximization was the goal.  But if the government—which pays roughly half of all health care costs in the country, and whose lead on payments is often followed by the private sector—is driving the process, should we be more confident that it has the patient’s interests at heart?  When does the government move from being a facilitator to a driver of care?  And should we really care?

These questions may be moot.  This interpretation of SDM is empowered in law and is probably coming.  Perhaps a more privately-driven version would develop as well.  It seems that large organizations and rule-making regimes (I choose this language rather than the ill-suited term “system”) are drawing in more and more physicians, regulating their participation through mechanisms like direct employment, large contracts, and the like, so this all may be the tide of history at work.  But we—patients and physicians—may need to read the print—which hopefully will not be too fine—carefully.

Sentience, the Image of God, and Human and Animal Souls

Not to steal Jerry Risser’s topic, but I think a further response to his last two posts on sentience warrants a separate post, not just a comment…

To start:  I heartily endorse Jerry’s analysis, and I agree with him that human moral agency seems to be a fruitful approach to addressing the moral status of animals.  As Dr. John Kilner suggested in his comment last week, one may be concerned that the AAHA’s statement cloaks an agenda, in which the uniqueness of human status in creation is obscured by a sort of mirage in which the raising of animals’ status serves, in part, to pull human status down, creating, as it were, a blurred “horizon line” between man and beast.  But the issue is one of metaphysics, if you will, not just ends and means.

Jerry’s key point is that anthropology is the correct starting point.  This means asking what is the essential nature of humans, not “just” what is their standing in creation.  Here, I believe that reflection on the soul, such as has been done by J.P. Moreland, may help.  Recall that Moreland takes a “Thomist” view of the soul, understanding it to be the “substantial, unified reality” that informs an individual’s entire being, grounds all of that individual’s ultimate capacities, is capable of existing in different states, and possesses different faculties.  Also, if I understand Moreland (and Scott Rae) correctly, we should distinguish between a being’s ultimate capacities—what it is capable of when fully developed and functioning—and its “capabilities,” which are realized or actualized capacities that can be exercised to greater or lesser degrees at different points in an individual’s existence.  It seems to me that this distinction between capacities and capabilities is real.  We are on shaky ground indeed when we attempt to ground moral status on capabilities (realized capacities), which are degreed properties.

Now, Moreland—and, if I am correct, Aristotle and Thomas before him, and, in contemporary days, Leon Kass—holds that animals do indeed have souls.  Indeed, Moreland says, so teaches the Bible.  But Moreland identifies several human capacities that do not characterize animals’ souls (for what follows, see Moreland’s booklet “What is the Soul?”, especially chapter 4):

  • Libertarian freedom of the will—and therefore, moral agency (as Jerry pointed out)
  • Ability to distinguish between desire and duty
  • Ability to entertain abstract thoughts
  • Ability to distinguish true universal judgments from mere generalizations
  • Awareness of themselves as selves, which encompasses “desires to have desires, beliefs about their beliefs, choices to work on their choices, thinking about their thinking, and awareness of their awareness” (all of which, Moreland argues, animals lack)
  • Finally, Moreland does not accept that animals possess language, which he argues requires symbols and not just signs.

Note that none of these bullet points is necessarily theistic in origin and none comes from a straightforward exegesis of Scripture.  But the implication, Moreland says, is that animals have souls and value before God, but not the intrinsic dignity that people, who are made in God’s image, have.  Humans “do not have duties to animals, [but] duties with respect to animals.”

This is all a longer way of endorsing Jerry’s “moral agency” approach.  But I must also add this: to get there, whatever one concludes about a narrow exegesis of the term “image of God” in scripture, one must allow that being in the image of God means something about the essence of man and woman—about what kind of beings we are.  I think that point is an indispensable starting point of a biblical approach to bioethics, and I find what I understand to be a more minimalist reading that the image of God is “a status and a standard” to be deeply, deeply unsatisfying.  I also think—forgive me, Dr. Kilner, for casting all humility aside here—that “the conclusion that animals matter much less than people because they are not God’s image” is NOT fallacious.  If you really believe that position is fallacious, then I submit you need to be prepared to negotiate with Jerry’s grizzly bear.

PS: Jerry’s emphasis on “responsible stewardship” echoes the current Presidential Commission on the Study of Bioethical Issues, which proposed “responsible stewardship” as a guiding principle in its statement on synthetic biology a couple of years ago.

An Early Nominee for a Top Bioethics Story of 2013

We’ve just come through the 2012 retrospectives season.  Rather than try to recount the top bioethics stories of 2012—a worthy task, to be sure—allow me to nominate one prospectively for 2013: the Supreme Court decision on the Myriad Genetics case.  Reports in the general press say that the Court will hear the case this coming spring (with a decision sometime after that).  I am not a patent attorney, so any discussion I can offer here will be ignorant about the legal nuances, and I must therefore be reserved.  But  as I understand the case, at issue is whether a gene sequence that is found in nature—in this case, the BRCA1/BRCA2 genes—can be patented.  Now, methods to target those genes, or their products, can be patented in the course of, for example, drug discovery and development.  And a specific method of assaying the gene can be patented.  But can the sequence itself be patented, and what does that say for any intellectual property rights around the interpretation of the results?

I am in the camp that is suspicious of patenting actual gene sequences—normal or mutant—as opposed to methods to assess them or to interdict the consequences of their biologic activity.  Of course, a decision in that direction would invalidate the Myriad Genetics patent, breaking their monopoly on the BRCA1/BRCA2 test—and lowering the price of testing in the process.  Some entrepreneurial opportunities would be hindered as a result, but my overall impression is that not only academic research, but the possibility for competing tests and for lower costs for personalized medicine in the process, would be enhanced.

Either way, it will be of keen interest to see the breadth of whatever decision the Court reaches, and its implications for other patents in biotech.  Rather than speculate here, I will wait for the decision.

A Couple of Other Bioethics Blogs Worth Checking Out

“Merry Christmas to all, and to all a good night….”

I am mailing it in on the holiday.  To do so, I thought I’d encourage readers of this blog to check out at least two other bioethics blogs that may not always be linked in the usual places.

One is Wesley Smith’s “Human Exceptionalism” blog at National Review Online.  Posts are reasonably frequent (although none in December 2012), with an emphasis on human dignity concerns (end-of-life, distinguishing human and animal moral status), biotechnology and risks of commoditizing human life, and the implications of the Patient Protection and Affordable Care Act (aka “Obamacare”).  Conservative, both socially and politically, as I take conservatism generally to be understood in today’s America.

The other is the “Over 65 Blog” at the Hastings Center website.  Organized by Daniel Callahan and colleagues, this blog focuses on reflections by senior citizens on health care, health policy, and generational issues raised by those and by the aging of our population generally.  The five stated goals are “a stronger role for seniors, self-determination, more care/less technology, confronting the cost problem, and the economic and family needs of the over-65 generation.”  The general perspective is more progressive or at least center-left.  Last week brought a really interesting post from Alicia Munnell, a management professor at Boston College, refuting the idea that older people working longer will mean taking jobs from the young.

I think both are thoughtful and authoritative, and welcome respites from the talking-point fiascos of broadcast journalism.

Don’t Forget the Soul

I believe that if we are properly to defend human dignity and limits on what we will be willing to attempt with biotechnology, we must do our best to define and defend an essential human nature, which ought not be tinkered with.  I further believe that effort means that we affirm that each human being has a real, immaterial soul that survives the body at death and that is not just “the software” or “the output” of the brain or DNA or our bodies in general.  I find it astounding that some Christians now deny that Scripture requires this position; the case that it does is made admirably by John Cooper in his book Body, Soul, and Life Everlasting.

In addition, I think that a philosophical defense of the soul is also critical, and in this I am attempting to digest and follow the arguments of Professor J.P. Moreland of Biola University.  He has developed those at book length in Body and Soul (co-authored with Scott Rae), in a related pamphlet What is the Soul?  Recovering Personhood in a Scientific Age, and, in shorter form, in a recent lecture for Biola’s Center for Christian Thought (click here for audio and for a subsequent lecture by John Cooper).

Why should we think the soul is a substantial, real entity, not just an epiphenomenon of brain function?  Space and my meager philosophical skills limit what I can say here, but I understand Prof. Moreland’s key points to be, at least in part:

1)      Mental events are not identical to physical events, even if the latter are intimately linked with the former.  For example, our experience of pain is different from the nerve firings that are the physiologic events that accompany a pain.

2)      Our basic self-awareness—in particular, our awareness that each of us owns our individual experiences, and each of us is an enduring self—cannot be adequately accounted for if we aren’t each individually a soulish, or spiritual/mental substance.

3)      A unified, first-person perspective can only be explained by the presence of a soul.  If we are primarily physical beings, then first-person statements can all be just as well described in the third person.

4)      If we can choose to act at all—that is, if we have free will to any meaningful extent—then there must be an agent acting.  This requires that there be an immaterial soul that is essentially who we are.  Otherwise, we have to hold that our actions are events determined entirely by physical causes, including other events, or that our actions are only seemingly “our” actions, since “we” can’t really cause anything other than what is caused by the biologic operations of our bodies (plus the movements of objects around us, at least in some cases).

5)      By an argument appealing to modal logic, if we can reasonably, or “strongly,” conceive of a state of affairs in which we exist disembodied, then we have good grounds for thinking we are not identical to our bodies, which cannot exist disembodied.  (This one for me is the most challenging of the five points listed here.)

The above are just bullet points indicating the sort of arguments Moreland advances in support of substance dualism, the view that the body (with the brain) and the mind/soul are two distinct things or substances.  Moreland identifies three forms of substance dualism: a Thomist form, to which he subscribes, that holds the human soul to contain all the ultimate capacities of a human being; a Cartesian form (advanced by Richard Swinburne, for example) that holds the soul to be more closely identified with mind in particular; and an “emergent” form (advanced by William Hasker) that holds the soul initially to emerge from brain and nervous system functioning but then to “exercise its own causal powers and be sustained by God after death” (Moreland, What is the Soul?, p. 29).

None of this diminishes the importance of the body, much less of the deeds done in the body.  To be sure, the Scripture affirms bodily resurrection.  But I think that Christians in bioethics ought to be substance dualists and spend some time delving into the issues around that position.  In an age when people seek to explain complex human decisions (like whom to mate with or whether to take a gun to a roomful of children) by appealing to DNA sequences or DNA expression or brain chemistry, we should remind ourselves and others that we are, most fundamentally, our souls—which need to be, and can be, fitted for heaven only by God Himself.

On the Boundaries of Moral Complicity

Last week’s lively exchange about the moral legacy of “the father of space medicine” invokes the broader issue of how to decide when one is being complicit in an immoral act.  (Please note that I am NOT attempting to weigh in further on the individual discussed last week—whose name I will not write here, in hopes that I can protect this post from further exchange about him personally.)

We all agree that what the Nazis did in the name of “human subject research” was evil.  At least, I think we all agree.  But would it be evil to use an anatomy text whose illustrations had been derived from the Nazis’ efforts?  Or, to be more contemporary about it, would it be unethical to take a new drug whose development included laboratory tests using stem cells from embryos specifically created or destroyed for use in those tests?   The Nazis are easy targets, but not a shield from more thorny issues that might strike closer to home.

Dr. Robert Orr addressed the issue of moral complicity at length in an article posted in 2003 on the website of the Center for Bioethics and Human Dignity.  In it, he posed several scenarios of moral complicity, and argued that they are not ethically equivalent.  To distinguish among them, he proposed five criteria:

1)      Timing—Association with a future immoral act is worse than association with one that is past.

2)      Proximity or remoteness—The more closely one is involved, the worse it is.

3)      Degree of certitude—how surely are the facts of the case known?  If not known, does one need to steer clear to avoid the possibility of appearing complicit?

4)      Degree of knowledge of the facts—Knowing them makes one more responsible than not knowing them (although I suppose we should be concerned about hiding behind a sort of “ignorance is bliss” argument).

5)      Intent—or, to be more exact, whether the intent of the person performing the immoral act and of a potentially complicit person are the same or different.

Dr. Orr explicitly rejected the possibility of “hand washing” in an attempt to absolve oneself from complicity (see: Pilate), and he counseled humility in judging the complicity of others.  Finally, he pointed out that hard and fast rules will be elusive, and that sensitivity to issues of the heart is paramount.

Read the whole thing.