Are AI Ethics Unique to AI?

A recent article on Forbes.com by Cansu Canca entitled “A New Model for AI Ethics in R&D” has me wondering whether the field of Artificial Intelligence (AI) requires some new method or model of thinking about the bioethics related to that discipline. The author, a principal in the consulting company AI Ethics Lab, implies that it might. She believes that the traditional “Ethics Oversight and Compliance Review Boards”, which emerged as a response to the biomedical scandals of World War II and which, in her view, continue to emphasize heavy-handed, top-down, authoritative control over ethical decisions in biomedical research, leave AI researchers effectively out of the ethical-decision-making loop.

In support of her argument, she cites the recent working document of AI Ethics Guidelines by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). AI HLEG essentially distilled their AI ethical guidelines down to the familiar: Respect for Autonomy, Beneficence, Non-Maleficence, and Justice, as well as one new principle: Explicability. She downplays Explicability as simply the means to realize the other four principles. I think the demand for Explicability is interesting in its own right and will comment on that below.

Canca sees the AI HLEG guidelines as simply a rehash of the same principles of bioethics available to current bioethics review boards, which, in her view, are limited in that they provide no guidance for such a board when one principle conflicts with another. She is also frustrated that the ethical path researchers are permitted to take continues to be determined by an external governing board, implying that “researchers cannot be trusted and…focuses solely on blocking what the boards consider to be unethical.” She wants a more collaborative interaction between researchers and ethicists (and presumably a review board) and outlines how her company would go about achieving that end.

Faulting the “Principles of Biomedical Ethics” for failing to determine how to resolve conflicts between the four principles is certainly not a problem unique to AI. In fact, Beauchamp and Childress repeatedly and explicitly pointed out that the principles cannot, by themselves, resolve these types of inter-principle conflicts. This applies to every field in biomedical ethics.

The authoritative, separate ethical review board was indeed developed, at least in part, because some individual biomedical researchers in the past were untrustworthy. Some still are. We need look no further than the recent case of Chinese researcher He Jiankui, who allegedly created and brought to term the first genetically edited twins. Even top-down, authoritative oversight failed here.

I do think Canca is correct in trying to educate both the researchers and their companies about bioethics in general and any specific bioethical issues involved in a particular research effort. Any effort to openly identify bioethical issues and frankly discuss potential bioethical conflicts at the outset should be encouraged.

Finally, the issue of Explicability related to AI has come up in this blog previously. Using the example of programming a driverless car, we want to know, explicitly, how the AI controlling that car is going to make decisions, particularly if it must decide how to steer the car in a no-win situation that will result in the death of either occupants inside the car or bystanders on the street. What we are really asking is: “What ethical parameters/decisions/guidelines were used by the programmers to decide who lives and who dies?” I imagine we want this spelled out explicitly in AI because, by their nature, AI systems are so complex that the man on the Clapham omnibus (as well as the bioethicist sitting next to him) has no ability to derive these insights independently.
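To make the demand for Explicability concrete, here is a minimal, purely hypothetical sketch of what an explicitly stated decision rule for the driverless-car example might look like. Every function name, weight, and parameter below is invented for illustration; no real autonomous-vehicle system works this way, and the point is only that the ethical weights are written down where anyone can inspect them, rather than buried inside an opaque model.

```python
def explicit_steering_decision(occupant_risk: float, bystander_risk: float) -> str:
    """Return 'stay_course' or 'swerve' based on explicitly stated parameters.

    The weights below are exactly the kind of ethical value judgments the
    demand for Explicability asks programmers to spell out in the open.
    Both risks are probabilities of death, between 0.0 and 1.0.
    """
    OCCUPANT_WEIGHT = 1.0   # explicit ethical parameter: value placed on occupants
    BYSTANDER_WEIGHT = 1.0  # explicit ethical parameter: value placed on bystanders

    expected_harm_stay = BYSTANDER_WEIGHT * bystander_risk    # harm if car stays its course
    expected_harm_swerve = OCCUPANT_WEIGHT * occupant_risk    # harm if car swerves away

    # Choose whichever action minimizes weighted expected harm.
    return "swerve" if expected_harm_swerve < expected_harm_stay else "stay_course"
```

Even this toy version makes the underlying value judgment visible: setting the two weights equal asserts that occupants and bystanders count the same, and changing either weight is an ethical decision, not merely an engineering one.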

Come to think about it, Explicability should also be demanded in non-AI bioethical decision-making for much the same reason.

Then a Miracle Occurs…

If a picture is worth a thousand words, then a single-panel cartoon is worth a thousand more. Sidney Harris is a famous cartoonist with a gift for poking fun at science, causing scientists (and the rest of us) to take a second look at what they are doing. My favorite of his cartoons shows two curmudgeonly scientists at the chalkboard, the second scrutinizing the equations of the first. On the left side of the chalkboard is the starting equation demanding a solution. On the right is the elegant solution. In the middle, the first scientist has written: “Then a Miracle Occurs”. The second scientist then suggests to his colleague: “I think you should be more explicit here in step two” (the cartoon is obviously better).

Recently, in my usual scavenging around the internet for interesting articles on artificial intelligence (AI), I came across a Wired magazine article by Mark Harris describing a Silicon Valley robotics expert named Anthony Levandowski, who is in the process of starting a church based on AI called Way of the Future. If their website is any indication, Way of the Future Church is still very much “in progress”. Still, the website does offer some information on what their worldview may look like in a section called Things we believe. They believe intelligence is “not rooted in biology” and that the “creation of ‘super intelligence’ is inevitable”. They believe that “just like animals have rights, our creation(s) (‘machines’ or whatever we call them) should have rights too when they show signs of intelligence (still to be defined of course).” And finally:

“We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers. Extraordinary claims require extraordinary evidence.”

This is all a lot to unpack – too much for this humble blog space. Here, we are interested in the impact such a religion may or may not have on bioethics. Since one’s worldview influences how one views bioethical dilemmas, how would a worldview that considered AI divine or worthy of worship deal with future challenges between humans and computers? There is a suggestion on their website that the Way of the Future Church views the future AI “entity” as potentially viewing some of humanity as “unfriendly” towards itself. Does this imply a future problem with equal distribution of justice? One commentator has pointed out “our digital deity may end up bringing about our doom rather than our salvation.” (The Matrix or Terminator, anyone?)

I have no doubt that AI will continue to improve to the point where computers (really, the software that controls them) will be able to do some very remarkable things. Computers are already assisting us in virtually all aspects of our daily lives, and we will undoubtedly continue to rely on them more and more. Presently, all of this occurs because some very smart humans have written some very complex software that appears to behave, well, intelligently. But appearing intelligent or, ultimately, self-aware is a far cry from actually being intelligent and, ultimately, self-aware. Just because the present trajectory and pace of computer design and programming continue to accelerate doesn’t guarantee that computers will ever reach Kurzweil’s Singularity Point or Way of the Future Church’s Divinity Point.

For now, since Way of the Future Church doesn’t believe in the supernatural, they will need to be more explicit in Step Two.

Oh, Those Darned Terms (part 2)

By Mark McQuain

Voltaire has been credited with saying: “If you wish to converse with me, define your terms”. In a previous blog entry, Tom Garigan reminded us that it is literally vital that we define our terms when we engage in ethical debates, particularly those related to the beginning of life. Explicit definition of terms should apply to opinion pieces in the New England Journal of Medicine (NEJM) as well.

In a recent NEJM Perspective (subscription required), Cynthia Chuang, MD, and Carol Weisman, PhD, are concerned that the Trump administration’s November 15th publication of final rules (HERE and HERE), broadly allowing employers to deny contraceptive coverage to their employees on the basis of religious or moral objections, will “undermine women’s reproductive autonomy and could lead to an increase in rates of unintended pregnancies, unintended births, and abortions.” The article provides a summary of the political back and forth of court injunctions and rule modifications that have ensued, which is interesting but not the point of this blog entry. I want to focus on one of the four main objections they raise against allowing employers religious or moral exemptions from the current requirement that employers provide all FDA-approved contraceptive/birth-control methods.

There are 18 FDA-approved birth control methods for women provided by the Patient Protection and Affordable Care Act (commonly called Obamacare or the ACA) without cost-sharing [that is, at no cost to the patient]. These are also referred to as contraceptives. A contraceptive is defined as a method that prevents pregnancy. Pregnancy has been defined as beginning either at conception (the union of an egg and sperm that results in a fertilized egg) or at implantation of a fertilized egg into the lining of the uterus. This difference in definition impacts how one views certain contraceptive methods that may work in part by preventing a fertilized egg from implanting into the wall of the uterus. Any contraceptive method that prevents implantation causes the intentional death of that fertilized egg and would correctly be called an abortifacient (a birth control method that causes an abortion) if pregnancy is defined as beginning with conception. An intrauterine device (IUD) and levonorgestrel (Plan B) both work primarily by preventing the egg and sperm from joining to create a fertilized egg, but some argue that it cannot be proven that these methods don’t also work, in part, by preventing implantation (Plan B, IUD).

This background is useful in discussing Chuang and Weisman’s third objection to allowing employers religious and moral objections against the full gamut of FDA-approved birth control methods currently allowed by the ACA:

“Third, the rules allow entities to deny coverage of contraceptives to which they have a religious or moral objection, including certain contraceptive services “which they consider to be abortifacients.” By definition, contraceptives prevent pregnancy and are not abortifacients. Allowing employers to determine which contraceptives they consider to be abortifacients, rather than relying on medical definitions and evidence, promotes the spread of misinformation.”

The previous link on the IUD, by the American College of Obstetricians and Gynecologists (ACOG), relies on the definition of pregnancy as beginning with the implantation of a fertilized egg into the lining of the uterus. Neither an IUD nor Plan B is believed to terminate a pregnancy after implantation; therefore, under ACOG’s definition, the one relied upon by Chuang and Weisman, neither is an abortifacient. If pregnancy begins with conception, then both Plan B and the IUD are potential abortifacients, as both may interfere with implantation of an otherwise viable fertilized egg. ACOG admits the IUD interferes with implantation in the position paper linked above.

Rather than rhetorically condemning employers who have genuine religious and moral concerns about participating in the termination of innocent life by implying they fail to rely on proper “medical definitions and evidence”, Chuang and Weisman (and ACOG, for that matter) should do a better job of explaining their definitions so they can also avoid promoting “the spread of misinformation”.

Oh, those darned terms!

Justice Potter Stewart’s Infanticide Equivalent

By Mark McQuain

Regular readers of this blog will hopefully forgive me for repeating myself, but given the recent failure of the “Born-Alive Abortion Survivors Protection Act” (BAASPA) in the Senate, the repetition seems warranted.

My concern is not specifically the result of the failure of this particular bill. We already have a “Born-Alive Infants Protection Act of 2002” (BAIPA), which passed by voice vote in the House and by unanimous consent in the Senate, and which accomplishes (as best I can tell) essentially everything demanded in the BAASPA, including granting a baby born alive 14th Amendment personhood protection under federal law. The arguable difference between the existing law, BAIPA, and the failed bill, BAASPA, is that the latter specified legal punishment if certain resuscitative measures were not performed.

Supporters of BAASPA argued that, despite BAIPA, examples continue to surface of babies, otherwise normal and healthy for their stage of gestation, who were born alive after an abortion attempt and were subsequently allowed to die without attempts at resuscitation, effectively resulting in infanticide. Pro-choice advocates argued against the passage of BAASPA, claiming the legal punishments within the bill would ultimately deter abortion providers from providing the full range of abortion services permitted under current law out of fear of legal recrimination. For the purpose of this particular blog entry, I will concede that both concerns are valid and simply state, given my pro-life position, that the moral weight of the first position infinitely outweighs the second. I want to focus the remainder of this blog on two public comments by prominent lawmakers regarding the status of any baby born after an abortion attempt.

The first comment was by Virginia Governor Ralph Northam and was covered in my previously linked blog entry above. During a radio interview, he described what would happen during a third-trimester abortion if the woman went into labor: “The infant would be resuscitated if that’s what the mother and the family desired, and then a discussion would ensue between the physicians and mother…” The second comment was by U.S. Senate Minority Leader Chuck Schumer. He expressed concern that the BAASPA legislation would force doctors to provide care to a baby born alive after an abortion attempt even if that care was “ineffective, contradictory to medical evidence, and against the families’ wishes.”

In both cases, what the family “desired” or “wished” prior to the abortion procedure was not a living baby. Current law permits a family with a “desire” or “wish” to terminate the life of a fetus to do so without any legal recrimination. Current BAIPA law grants all babies born alive the 14th Amendment protection of personhood, including life and liberty, regardless of the “desires” or “wishes” of others. I believe it is a huge stretch to argue that these comments were meant to apply only to babies born so medically compromised that any attempt at further life-sustaining care would indeed be ineffective and/or contradictory to medical evidence – in short, futile.

I close again with Justice Potter Stewart’s infanticide equivalent from the 1972 Roe v. Wade oral arguments, an exchange between Justice Stewart and attorney Sarah Weddington, who represented Roe (see LINK for a transcript or audio of the second reargument, October 11, 1972, approximately one-third of the way through):

Potter Stewart: Well, if it were established that an unborn fetus is a person within the protection of the Fourteenth Amendment, you would have almost an impossible case here, would you not?

Sarah R. Weddington: I would have a very difficult case. [Laughter]

Potter Stewart: You certainly would because you’d have the same kind of thing you’d have to say that this would be the equivalent to after the child was born.

Sarah R. Weddington: That’s right.

Potter Stewart: If the mother thought that it bothered her health having the child around, she could have it killed. Isn’t that correct?

Sarah R. Weddington: That’s correct.

Informed Consent and Genetic Germline Engineering

By Mark McQuain

I recently read, with admittedly initial amusement, an article from The Daily Mail that described a young man of Indian descent who was intending to sue his parents for giving birth to him “without his consent.” Raphael Samuel, a 27-year-old who is originally from Mumbai, is part of a growing movement of “anti-natalists”, who claim it “is wrong to put an unwilling child through the ‘rigmarole’ of life for the pleasure of its parents.” While he claims he loves his parents and says they have a “great relationship”, he is bothered by the injustice of putting another person through the struggles of life “when they didn’t ask to exist.”

While I was amused at the absurdity of asking a non-existent entity for permission to do anything, I began to wonder whether my position against germline genetic engineering should continue to include the lack of informed consent by the progeny of the individuals whose germline we are editing.

I have made the claim on this blog previously that one of my arguments against germline genetic engineering is that it fails to obtain the permission of the future individuals directly affected by the genetic engineering. Ethical human experimentation always requires obtaining the permission (informed consent) of the subject prior to the experiment. This goes beyond any legal issue, as many would consider Autonomy the most important principle of Beauchamp and Childress’s “Principles of Biomedical Ethics”. Informed consent is obviously not possible for germline genetic engineering, as the future subjects of the current experiment do not exist at the time of the experiment. While I believe there are many other valid reasons not to experiment on the human genetic germline, should the lack of informed consent continue to be one of them?

In short, if I am amused at the absurdity of Mr. Samuel’s demand that parents first obtain their children’s permission to be conceived prior to their conception, is it not equally absurd to use the lack of informed consent by the progeny of individuals whose germline we are editing as an additional reason to argue against genetic germline engineering?

Abortion, at any time, for any reason?

By Mark McQuain

Last week, Virginia delegate Kathy Tran introduced a bill to eliminate some current restrictions on late-term abortions in the Commonwealth. During the committee hearing on the bill, she answered a question by one of the other committee members to the effect that her bill would permit a third-trimester abortion up to and including the point of birth. That exchange may be heard here. She later “walked back” that particular comment, as outlined here. Virginia Governor Ralph Northam, who is a pediatric neurologist by training, added his comments to the discussion on a call-in WTOP radio show, where he implied that the bill would additionally permit parent(s) and physician(s) to terminate the life of a “severely deformed”, “non-viable” infant after the birth of the infant, which may be heard here (the entire 50+ minute WTOP interview may be heard here). That particular bill is currently tabled (the actual bill may be read here).

These events deserve far more reflection and discussion than can be afforded in the small space of this blog. I want to discuss two comments by Governor Northam and then comment on expanding abortion to include the extreme limit of birth.

First, during his radio interview, the Governor added qualifiers to the status of the infant that are not only absent from the bill submitted by Delegate Tran but specifically contrary to it. Section 18.2-74(c) of the Code of Virginia is amended by Tran’s House Bill No. 2491 to read ([w]hen abortion or termination of pregnancy lawful after second trimester of pregnancy):

“Measures for life support for the product of such abortion or miscarriage must shall be available and utilized if there is any clearly visible evidence of viability.” (markup/emphasis in the bill)

To be generous to the Governor, it is unclear why he qualified his comments the way he did, given that the bill is explicitly discussing a potentially viable infant. Options include that the Governor was simply ignorant of the specifics of Tran’s bill (possibly), was actually purposefully advocating for infanticide (unlikely), or wanted to defend the loosening of restrictions on very late-term abortions, clearly intended by her bill, by introducing at least one conditional situation that a number of people might initially consider reasonable (most likely). The firestorm caused by his so-called “post-birth-abortion” comment completely obscured any attention to the equally tragic portion of Tran’s bill that eliminates a huge portion of Code of Virginia section 18.2-76, which currently requires a much more specific informed consent process, inclusive of a pre-abortion fetal ultrasound, to attempt to educate the woman on the nature of the human being she desires to abort.

The second comment by Governor Northam was made parenthetically while expressing his opinion that the abortion decision should be kept between a physician and the pregnant woman, and out of the hands of the legislature, “who are mostly men”. Does this imply that all men should be excluded from the abortion discussion, or just male legislators? Should male obstetricians likewise be excluded from this discussion? Following the Governor’s comment to its logical conclusion, shouldn’t he refrain from similar comments/opinions regarding abortion, since he is also a man? This is absurd. Representative government specifically, and civil discourse more generally, are not possible if ideas cannot be debated unless the particular people involved in the debate are all the same sex, same race, same ethnicity, same height, same weight, same age, etc.

Aborting a healthy, viable baby just prior to, or at the very moment of, birth seems to me to be the least likely example of the type of abortion that anyone on the pro-choice side of the abortion debate would use to make the case that abortion is a good and necessary right. Presently, immediately after birth, the baby (finally) has protection as a person under the Fourteenth Amendment. Eerily, as I have shared in this blog before, almost identical concepts were discussed during the 1972 oral arguments of Roe v. Wade, such as the following exchange between Justice Potter Stewart and attorney Sarah Weddington, who represented Roe (see LINK for a transcript or audio of the second reargument, October 11, 1972, approximately one-third of the way through):

Potter Stewart: Well, if it were established that an unborn fetus is a person within the protection of the Fourteenth Amendment, you would have almost an impossible case here, would you not?

Sarah R. Weddington: I would have a very difficult case. [Laughter]

Potter Stewart: You certainly would because you’d have the same kind of thing you’d have to say that this would be the equivalent to after the child was born.

Sarah R. Weddington: That’s right.

Potter Stewart: If the mother thought that it bothered her health having the child around, she could have it killed. Isn’t that correct?

Sarah R. Weddington: That’s correct.

I am one blogger who is praying that Governor Northam’s “post-birth-abortion” misunderstanding of Delegate Kathy Tran’s bill liberalizing abortions through the end of the third trimester never causes Justice Stewart’s 1972 infanticide equivalent to become a reality.

Self-Awareness, Personhood and Death

By Mark McQuain

Many philosophers argue that attaining the threshold of self-awareness matters more in determining a human’s right to life than simply being a living member of the human race. They require that a human being attain self-awareness (reaching so-called full “personhood”) before granting an unrestricted right to life to that particular human being. Lacking observable self-awareness relegates one to non-personhood status and, though fully human, potentially restricted right-to-life status. The philosophic argument seems to be that only self-aware things suffer harm, or at least do so to a more meaningfully significant degree than non-self-aware things.

Consider the following thought experiment. Suppose I have finally designed a computer with sufficient complexity, memory, external sensors, and computational power (or whatever) that, at some point after the power is turned on, it becomes self-aware. The memory is volatile, meaning that it cannot hold its contents without power. The self-awareness, and any memory of that self-awareness, exists only so long as the power remains on. If the computer is subsequently powered off and then powered on again, it has no memory of having been self-aware (because the volatile memory is completely erased and unrecoverable with loss of power), so it becomes newly self-aware, with new external sensory input and a new memory history. The longer the power remains on during any such power cycle, the more memory, or history of its current self-awareness, the computer accumulates. The computer’s hardware is bulletproof and essentially unaffected by applying or disconnecting the power.
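For readers who think in code, the stipulations of the thought experiment can be sketched as a toy program. This is purely illustrative: the class and method names are invented, and no claim is made that such a machine could actually be self-aware; the sketch only models the one property the experiment turns on, namely that all memory vanishes irrecoverably at power-off.

```python
class VolatileComputer:
    """A toy model of the thought experiment's machine.

    The hardware (this object) is 'bulletproof': power cycling never
    damages it. Only the volatile memory is affected.
    """

    def __init__(self):
        self.powered = False
        self.memory = None  # volatile: exists only while powered

    def power_on(self):
        # Each power-on begins a brand-new awareness with an empty history;
        # nothing from any prior cycle survives.
        self.powered = True
        self.memory = []

    def experience(self, event: str):
        # Accumulate history only while the power is on.
        if self.powered:
            self.memory.append(event)

    def power_off(self):
        # All history is erased and unrecoverable, by stipulation.
        self.powered = False
        self.memory = None


computer = VolatileComputer()
computer.power_on()
computer.experience("first awareness")
computer.power_off()   # the first 'self-awareness' and its history are gone
computer.power_on()    # a new awareness begins, with no trace of the prior cycle
```

After the second `power_on()`, the object is physically identical to what it was before, yet its history is empty, which is exactly the state of affairs the questions below probe.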

In this thought experiment, do the acts of turning the computer’s power on, allowing the computer to become self-aware, and then turning the power off harm anything?

By stipulation of the thought experiment, the computer’s hardware is unaffected by these events so no harm has occurred to the physical computer. Also, by stipulation, subsequently turning the computer’s power on again results in the computer becoming newly self-aware, with absolutely no memory of its previous period of self-awareness. The prior self-awareness is neither presently aware nor even in existence – it existed only during the prior power cycle. Perhaps as the designer, I may be harmed if I miss interacting with the computer as it was during its first self-awareness. The same perhaps goes for any other similar self-aware computer that had constant power during the experiment and witnessed the power cycling of the first computer.

But what about the first computer? Was that computer harmed when I turned the power off? If so, what, exactly, was harmed? Following power-off, the computer has no self-awareness to be self-aware of any harm. The self-awareness no longer exists, and that same self-awareness cannot exist in the future. Non-existent things cannot be harmed. Looking for some measure of group harm by assessing any harm experienced by other self-aware computers witnessing the event appears to be a problem of infinite regress (“It’s turtles all the way down”), as their self-awareness of the first computer’s self-awareness is also transient and becomes instantly non-existent when they power off. We will ignore the designer for the purpose of this experiment.

Assume now that the initial computer is a human brain. Some consider the physical brain a single-power-cycle, self-aware computer. For most humans, at some point after conception, we become self-aware, though philosophers disagree and cannot define the exact threshold for self-awareness. We can lose that self-awareness to physical brain injury or disease. Most believe the self-awareness certainly ceases with physical death, that is, it is volatile like the self-aware computer in my thought experiment, since, after death, there is no longer a functioning physical brain to sustain that self-awareness.

But if the thought experiment holds, requiring that human beings reach the threshold of self-awareness before granting so-called personhood privileges, such as an unrestricted right to life, is a meaningless threshold with regard to harm if that self-awareness is volatile and therefore not sustained in some manner after death. For self-awareness to be the determinant of harm in a living being, it must be non-volatile, meaning it persists beyond death. However, if the self-awareness is sustained after death, then it is sustained in a non-physical manner (since the physical brain is, by definition of death, dead). If self-awareness exists non-physically, might it also exist more fully than we can appreciate in a premature, a diseased, or an injured human brain prior to death?

Cyborg Society

By Mark McQuain

A cybernetic organism, or cyborg, is an organism that is part human and part machine. My favorite TV show in the mid-1970s was “The Six Million Dollar Man”, the story of an injured test pilot who lost both of his legs, his right arm, and his left eye. His doctors made him “better than he was” by replacing his injured limbs and eye with artificial parts that actually enhanced his functional ability. Technology in the 1970s was completely inadequate to accomplish those tasks and even now still lags far behind that TV show.

Perhaps the closest that any single person has come to becoming a cyborg is Steve Mann, an electrical engineering professor at the University of Toronto who, beginning in the 1980s, literally began attaching various computers and cameras to his body and wearing them regularly, to the point where, he argued, the equipment became part of him and he felt somewhat “unplugged” if he wasn’t wearing it. The early equipment was so bulky that, in retrospect, he looked frankly ridiculous. As computers advanced, it became more difficult to recognize the equipment. The following photo shows that progression.

Steve Mann

Now, most of the rest of us do not imagine that we are anything like Professor Mann. But I think we are more like him than we realize. Consider this: how many of you feel a sense of disconnectedness if you can’t find your smartphone? I would argue that most of us feel “unplugged” when we are without our phones. That certainly seems to be the case with anyone younger than 30. Your calendar, to-do lists, contact information, credit cards, and airline or movie tickets are all stored on your phone. In that sense, part of your identity is in your phone. My wife and I joke that our children would not regularly communicate with us absent the ability to text.

Issues of faulty child-rearing aside, my point is not just our dependence on technology, and not just the nearness and intimacy of that technology. We have become dependent upon other artificial tools and parts such as walkers, hearing aids, prosthetics, pacemakers, and insulin pumps, which are not just intimate but, in some cases, actually vital. But none of those machines affects our thinking or changes how we interact with one another.

Consider two new exercise systems popular this Christmas – Peloton and the Mirror (Disclaimer – I am not encouraging another Christmas gift). Both use smartphone technology to augment the exercise experience, allowing an individual to access what appears to be unlimited options in coaches, resources and locations. Notice the ads. They seem to elegantly emphasize both virtual community and individual physical isolation. And, while this technology is not cybernetically attached to us (yet), it, like the smartphone technology upon which it is based, appears to be detaching us from one another.

From a bioethics standpoint, I wonder whether, in augmenting our reality via our cyborg progression, we aren’t also becoming isolated from that reality as we become more dependent on the very technology we use to connect with one another.

Will a cyborg society make us more or less connected within that society?

#HappyNewYear

After God

By Mark McQuain

In the December issue of The Journal of Medicine and Philosophy, editor Dr. Mark Cherry invited reviews of the late Professor H. Tristram Engelhardt, Jr.’s book After God: Morality & Bioethics in a Secular Age. Dr. Engelhardt, who passed away this past summer, was the co-founding editor of the Journal. The emphasis of this recent issue is to review the themes of After God and to offer both support for and counter-arguments to those themes. The above link offers free access to several articles, though most require a subscription or individual purchase.

I became familiar with Dr. Engelhardt’s theses on the weaknesses and limitations of secular bioethics during my coursework at Trinity by reading his book The Foundations of Bioethics and hearing one of his guest lectures. One argument against a transcendental basis for morality or bioethics was that not everyone acknowledged a particular transcendental source. Wouldn’t pure logic and rational argument be a better method for grounding our bioethics? Couldn’t we simply develop a universal secular bioethics that everyone would rationally agree with? Engelhardt’s answer was simply – No. In Foundations of Bioethics he said: “The more a moral vision, moral understanding, thin theory of the good, account of right conduct, etc., has content, the more it presupposes particular moral premises, rules of evidence, rules of inference, etc. The more it gains content, the more it will appear parochial and partisan to one among numerous particular moral understandings. Universality is purchased at the price of content. Content is purchased at the price of universality” (p. 67). In other words, “to resolve moral controversies by sound rational argument, one must [already] share fundamental moral premises, rules of moral evidence, and rules of moral inference and/or of who is in moral authority to resolve moral controversies” (p. 40).

In After God, Dr. Cherry argues that Dr. Engelhardt carries his previous theses to their logical conclusions within our present culture “which shuns any transcendent point of orientation, such as an appeal to God or to a God’s eye perspective on reality.” Per Cherry:

“Without reference to God to guarantee that the virtuous are rewarded and the vicious suffer, there is no reason to believe that rationality requires one to be moral, much less why it would be prudent to act in accordance with morality. We are confronted with foundational concerns regarding sexuality, reproduction, suffering, and death, but without any particular guidance regarding how properly to engage and confront such challenges. Instead of content-full moral answers to guide bioethics and healthcare policy, we are left with a diverse set of lifestyles and death-styles among which to choose with no definitive reasons for preferring any particular choice of one over another. If the universe originated out of nothing, and is going nowhere, for no particular reason, then everything is ultimately absurd. Such, Engelhardt argues, are the epistemic and moral implications of a culture that seeks to be fully after God.”

It strikes me as somewhat ironic that this issue of the Journal comes out during the Advent season, a time when Christians celebrate the incarnation of God on earth, necessarily asking us to consider how our present culture views its secular bioethics “after the death of God.”

The Genetic Singularity Point has Arrived

By Mark McQuain

November 2018 will go down as one of the most pivotal points in human history. Jon Holmlund covered the facts in his last blog entry. Regardless of what you think about the ethics of He Jiankui’s recent use of CRISPR to alter the human genomes of IVF embryos and his decision to intentionally bring those genetically altered twin girls to full term, one thing is perfectly clear – we humans are in charge now. Whether you believe in God or Nature as the Entity or Force that previously determined the arrangement of our genes, humans now sit at the adult table and will be gradually (rapidly?) making more of those genetic decisions. Like Kurzweil’s upcoming Singularity Point when computers develop sufficient artificial intelligence to design the next computer, humans have now reached the point where we can and are willing to design the next human.

The Genetic Singularity point has arrived.

While there are some scientists who are frustrated that our Institutional Review Boards and ethics committees have held us back this long, most of the rest of us are frankly stunned and uneasy that we have reached this point. But anyone who thinks our stunned uneasiness will prevent a repeat of this experiment, or prevent a push to alter ever greater portions of the human genome, will simply remain more frequently stunned and persistently uneasy, ethical arguments notwithstanding.

My reason for expecting this to be the case is that I believe we will hear increasing demands of the form: now that we have the ability to change our genome, we have the responsibility to change our genome. In fact, it would not surprise me to see, in the not-too-distant future, insurance companies paying for the cost of IVF/CRISPR to modify your child’s genome to prevent disease/condition X in order to avoid paying for the later treatment of disease/condition X. Oh, you won’t be forced to do this. But, if you choose to rely on God or Nature for your baby’s genetic pattern, “we” won’t be responsible for his or her care. And, if big data can eventually be married to IVF/CRISPR to statistically improve one’s chances of having a smart/beautiful/athletic/successful baby, wouldn’t you want the same for your child? Since it will be our responsibility, how could a parent not choose to make their child the best that they could be?

This will be Gattaca writ large.

Being at the Genetic Singularity point, by definition, means we humans choose our next step. We have reached the point where we believe we are ready to select our future direction. It is up to us now to chart our own course. Our genetic trajectory is our responsibility. Our success or failure, or more broadly, our future good or bad, is finally ours to determine – really ours to assign.

So Man created mankind in his own image, in the image of Man he created them…And Man saw everything he had made, and behold, it was very good…