Are AI Ethics Unique to AI?

By Mark McQuain

A recent article by Cansu Canca entitled “A New Model for AI Ethics in R&D” has me wondering whether the field of Artificial Intelligence (AI) requires some new method or model of thinking about the bioethics related to that discipline. The author, a principal in the consulting company AI Ethics Lab, implies that it might. She believes that the traditional “Ethics Oversight and Compliance Review Boards,” which emerged in response to the biomedical scandals of World War II and which, in her view, continue to exert heavy-handed, top-down, authoritative control over ethical decisions in biomedical research, leave AI researchers effectively out of the ethical decision-making loop.

In support of her argument, she cites the recent working document of AI Ethics Guidelines by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). The AI HLEG essentially distilled its ethical guidelines for AI down to the familiar four principles: Respect for Autonomy, Beneficence, Non-Maleficence, and Justice, along with one new principle: Explicability. She downplays Explicability as simply the means of realizing the other four principles. I think the demand for Explicability is interesting in its own right and will comment on it below.

Canca sees the AI HLEG guidelines as simply a rehash of the same principles of bioethics already available to current bioethics review boards, which, in her view, are limited in that they provide no guidance when one principle conflicts with another. She is also frustrated that the ethical path researchers may take continues to be determined by an external governing board, an arrangement that, she implies, assumes “researchers cannot be trusted and…focuses solely on blocking what the boards consider to be unethical.” She wants a more collaborative interaction between researchers and ethicists (and presumably a review board) and outlines how her company would go about achieving that end.

Faulting the “Principles of Biomedical Ethics” for failing to specify how to resolve conflicts among the four principles is certainly not a problem unique to AI. In fact, Beauchamp and Childress repeatedly and explicitly pointed out that the principles cannot, on their own, resolve these inter-principle conflicts. This applies to every field in biomedical ethics.

Authoritative, separate ethical review boards were indeed developed, at least in part, because some individual biomedical researchers in the past proved untrustworthy. Some still are. We need look no further than the recent Chinese researcher He Jiankui, who allegedly created and brought to term the first genetically edited twins. Even top-down, authoritative oversight failed here.

I do think Canca is correct in trying to educate both the researchers and their companies about bioethics in general and any specific bioethical issues involved in a particular research effort. Any effort to openly identify bioethical issues and frankly discuss potential bioethical conflicts at the outset should be encouraged.

Finally, the issue of Explicability as it relates to AI has come up in this blog previously. Using the example of programming a driverless car, we want to know, explicitly, how the AI controlling that car will make decisions, particularly if it must decide how to steer in a no-win situation that will result in the death of either the occupants inside the car or bystanders on the street. What we are really asking is: “What ethical parameters/decisions/guidelines did the programmers use to decide who lives and who dies?” I imagine we want this spelled out explicitly in AI because, by their nature, AI systems are so complex that the man on the Clapham omnibus (as well as the bioethicist sitting next to him) has no way to discern those parameters on his own.
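To make the idea concrete, here is a purely hypothetical sketch (not the logic of any real vehicle or any system Canca or the AI HLEG describes) of what Explicability could look like in practice: the ethical rule the software applies is written out as plain, inspectable code that an ethicist could read and question, rather than being buried in an opaque model. The names and the tie-breaking rule are my own invented assumptions for illustration only.

```python
# Hypothetical illustration of "explicability": the ethical parameter
# is stated in one readable place, not hidden inside the system.
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible course of action and its projected harm."""
    description: str
    expected_casualties: int


def choose_outcome(options: list[Outcome]) -> Outcome:
    """The explicit (and debatable) ethical rule of this toy policy:
    pick the option with the fewest expected casualties, breaking ties
    in favor of the option listed first. A reviewer can see, and
    challenge, exactly this rule."""
    return min(options, key=lambda o: o.expected_casualties)


# A no-win scenario with invented numbers, for illustration only.
swerve = Outcome("swerve toward the barrier, risking the occupant", 1)
stay = Outcome("stay on course toward the pedestrians", 3)

decision = choose_outcome([swerve, stay])
print(decision.description)
```

The point of the sketch is not that casualty-minimization is the right rule; it is that whatever rule is chosen is laid bare for the ethicist and the man on the Clapham omnibus alike to examine.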

Come to think of it, Explicability should also be demanded in non-AI bioethical decision-making, for much the same reason.
