Are AI Ethics Unique to AI?

A recent article on Forbes.com by Cansu Canca entitled “A New Model for AI Ethics in R&D” has me wondering whether the field of Artificial Intelligence (AI) requires some new method or model for thinking about the bioethics of that discipline. The author, a principal in the consulting company AI Ethics Lab, implies that it might. She believes that the traditional “Ethics Oversight and Compliance Review Boards,” which emerged in response to the biomedical scandals of World War II and, in her view, continue to exercise heavy-handed, top-down, authoritative control over ethical decisions in biomedical research, leave AI researchers effectively out of the ethical decision-making loop.

In support of her argument, she cites the recent working document of AI Ethics Guidelines by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG). AI HLEG essentially distilled their AI ethical guidelines down to the familiar: Respect for Autonomy, Beneficence, Non-Maleficence, and Justice, as well as one new principle: Explicability. She downplays Explicability as simply the means to realize the other four principles. I think the demand for Explicability is interesting in its own right and will comment on that below.

Canca sees the AI HLEG guidelines as simply a rehash of the same principles of bioethics already available to current bioethics review boards, which, in her view, are limited in that they provide no guidance when one principle conflicts with another. She is also frustrated that the ethical path researchers are permitted to take continues to be determined by an external governing board, an arrangement that, she writes, implies “researchers cannot be trusted and…focuses solely on blocking what the boards consider to be unethical.” She wants a more collaborative interaction between researchers and ethicists (and presumably a review board) and outlines how her company would go about achieving that end.

Faulting the “Principles of Biomedical Ethics” for failing to determine how conflicts between the four principles should be resolved is certainly not a problem unique to AI. In fact, Beauchamp and Childress have repeatedly and explicitly pointed out that the principles, taken by themselves, cannot resolve these inter-principle conflicts. This applies to every field of biomedical ethics.

The practice of having an authoritative, separate ethical review board was indeed developed, at least in part, because some individual biomedical researchers in the past proved untrustworthy. Some still are. We need look no further than the recent case of the Chinese researcher He Jiankui, who allegedly created and brought to term the first genetically edited twins. Even top-down, authoritative oversight failed there.

I do think Canca is correct in trying to educate both the researchers and their companies about bioethics in general and any specific bioethical issues involved in a particular research effort. Any effort to openly identify bioethical issues and frankly discuss potential bioethical conflicts at the outset should be encouraged.

Finally, the issue of Explicability as it relates to AI has come up in this blog previously. Using the example of programming a driverless car, we want to know, explicitly, how the AI controlling that car will make decisions, particularly if it must decide how to steer the car in a no-win situation that will result in the death of either the occupants inside the car or bystanders on the street. What we are really asking is: “What ethical parameters/decisions/guidelines did the programmers use to decide who lives and who dies?” I imagine we want this spelled out explicitly in AI because, by their nature, AI systems are so complex that the man on the Clapham omnibus (as well as the bioethicist sitting next to him) has no way of determining these things on his own.
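To make that demand concrete, here is a minimal, purely hypothetical sketch (mine, not anything from Canca, the AI HLEG, or any actual manufacturer) of what explicitly stated ethical parameters might look like in a driverless-car controller. The point is not the particular rules but that they are ordinary, readable data a review board could inspect and debate – exactly what an opaque system does not offer.

```python
# Hypothetical sketch only: explicit, human-readable ethical parameters.
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicalParameters:
    """Choices a reviewer could read directly, rather than infer from behavior."""
    never_leave_roadway: bool = True     # e.g., refuse to swerve onto a sidewalk
    weight_occupant_harm: float = 1.0    # relative weights chosen by a human
    weight_bystander_harm: float = 1.0

def choose_maneuver(params, occupant_harm, bystander_harm):
    """Pick the allowed maneuver with the lowest weighted expected harm.

    occupant_harm / bystander_harm map hypothetical maneuver names
    (e.g. "brake", "swerve_to_sidewalk") to harm estimates that some
    upstream perception system would have to supply.
    """
    def total(m):
        return (params.weight_occupant_harm * occupant_harm[m]
                + params.weight_bystander_harm * bystander_harm[m])
    allowed = [m for m in occupant_harm
               if not (m == "swerve_to_sidewalk" and params.never_leave_roadway)]
    return min(allowed, key=total)

# With equal weights and a hard "stay on the roadway" rule, braking is chosen
# even though swerving would better protect the occupants.
print(choose_maneuver(EthicalParameters(),
                      occupant_harm={"brake": 0.6, "swerve_to_sidewalk": 0.1},
                      bystander_harm={"brake": 0.2, "swerve_to_sidewalk": 0.9}))
```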

Come to think about it, Explicability should also be demanded in non-AI bioethical decision-making for much the same reason.

Then a Miracle Occurs…

If a picture is worth a thousand words, then a single-paneled comic is worth a thousand more. Sydney Harris is a famous cartoonist who has the gift of poking fun at science, causing scientists (and the rest of us) to take a second look at what they are doing. My favorite of his cartoons shows two curmudgeonly scientists at the chalkboard, the second scrutinizing the equations of the first. On the left side of the chalkboard is the starting equation demanding a solution. On the right is the elegant solution. In the middle, the first scientist has written: “Then a Miracle Occurs”. The second scientist then suggests to his colleague: “I think you should be more explicit here in step two” (the cartoon is obviously better).

Recently, in my usual scavenging around the internet for interesting articles on artificial intelligence (AI), I came across a Wired magazine article by Mark Harris describing a Silicon Valley robotics expert named Anthony Levandowski, who is in the process of starting an AI-based church called Way of the Future. If their website is any indication, Way of the Future Church is still very much “in progress”. Still, the website does offer some information on what their worldview may look like in a section called Things we believe. They believe intelligence is “not rooted in biology” and that the “creation of ‘super intelligence’ is inevitable”. They believe that “just like animals have rights, our creation(s) (‘machines’ or whatever we call them) should have rights too when they show signs of intelligence (still to be defined of course).” And finally:

“We believe in science (the universe came into existence 13.7 billion years ago and if you can’t re-create/test something it doesn’t exist). There is no such thing as “supernatural” powers. Extraordinary claims require extraordinary evidence.”

This is all a lot to unpack – too much for this humble blog space. Here, we are interested in the impact such a religion may or may not have on bioethics. Since one’s worldview influences how one views bioethical dilemmas, how would a worldview that considered AI divine or worthy of worship deal with future challenges between humans and computers? There is a suggestion on their website that the Way of the Future Church views the future AI “entity” as potentially viewing some of humanity as “unfriendly” towards itself. Does this imply a future problem with equal distribution of justice? One commentator has pointed out “our digital deity may end up bringing about our doom rather than our salvation.” (The Matrix or Terminator, anyone?)

I have no doubt that AI will continue to improve to the point where computers (really, the software that controls them) will be able to do some very remarkable things. Computers are already assisting us in virtually all aspects of our daily lives, and we will undoubtedly continue to rely on computers more and more. Presently, all of this occurs because some very smart humans have written some very complex software that appears to behave, well, intelligently. But appearing intelligent or, ultimately, self-aware, is a far cry from actually being intelligent and, ultimately, self-aware. Just because the present trajectory and pace of computer design and programming continues to accelerate doesn’t guarantee that computers will ever reach Kurzweil’s Singularity Point or Way of the Future Church’s Divinity Point.

For now, since Way of the Future Church doesn’t believe in the supernatural, they will need to be more explicit in Step Two.

Self-Awareness, Personhood and Death

By Mark McQuain

Many philosophers argue that attaining the threshold of self-awareness is more important in determining a human’s right-to-life than simply being a living member of the human race. They require that a human being attain self-awareness (reaching so-called full “personhood”) before granting that particular human being an unrestricted right-to-life. Lacking observable self-awareness relegates one to non-personhood status and, though fully human, to a potentially restricted right-to-life status. The philosophical argument seems to be that only self-aware things suffer harm, or at least suffer it to a significantly more meaningful degree than non-self-aware things.

Consider the following thought experiment. Suppose I finally design a computer with sufficient complexity, memory, external sensors and computational power (or whatever) that, at some point after the power is turned on, it becomes self-aware. The memory is volatile, meaning it cannot hold its contents without power. The self-awareness, and any memory of that self-awareness, exists only so long as the power remains on. If subsequently powered off and then powered on again, the computer has no memory of its prior self-awareness (because the memory is volatile and is completely erased and unrecoverable with loss of power), so it becomes newly self-aware, with new external sensory input and a new memory history. The longer the power remains on during any such power cycle, the more memory or history of its current self-awareness the computer accumulates. The computer’s hardware is bulletproof and is essentially unaffected by applying or disconnecting the power.

In this thought experiment, do the acts of turning the computer’s power on, allowing the computer to become self-aware, and then turning the power off harm anything?

By stipulation of the thought experiment, the computer’s hardware is unaffected by these events so no harm has occurred to the physical computer. Also, by stipulation, subsequently turning the computer’s power on again results in the computer becoming newly self-aware, with absolutely no memory of its previous period of self-awareness. The prior self-awareness is neither presently aware nor even in existence – it existed only during the prior power cycle. Perhaps as the designer, I may be harmed if I miss interacting with the computer as it was during its first self-awareness. The same perhaps goes for any other similar self-aware computer that had constant power during the experiment and witnessed the power cycling of the first computer.

But, what about the first computer? Was that computer harmed when I turned the power off? If so, what, exactly, was harmed? Following power-off, the computer has no self-awareness to be self-aware of any harm. The self-awareness no longer exists and that same self-awareness cannot exist in the future. Non-existent things cannot be harmed. Looking for some measure of group harm by assessing any harm experienced by other self-aware computers witnessing the event appears to be a problem of infinite regress (“It’s turtles all the way down”), as their self-awareness of the first computer’s self-awareness is also transient and becomes instantly non-existent when they power off. We will ignore the designer for the purpose of this experiment.

Assume now that the initial computer is a human brain. Some consider the physical brain a single-power-cycle, self-aware computer. For most humans, at some point after conception, we become self-aware, though philosophers disagree and cannot define the exact threshold for self-awareness. We can lose that self-awareness to physical brain injury or disease. Most believe the self-awareness certainly ceases with physical death, that is, it is volatile like the self-aware computer in my thought experiment, since, after death, there is no longer a functioning physical brain to sustain that self-awareness.

But if the thought experiment holds, requiring that human beings reach the threshold of self-awareness before granting so-called personhood privileges, such as an unrestricted right-to-life, is a meaningless threshold with regard to harm if that self-awareness is volatile and therefore not sustained in some manner after death. For self-awareness to be the determinant of harm in a living being, it must be non-volatile, meaning it persists beyond death. However, if self-awareness is sustained after death, then it is sustained in a non-physical manner (since the physical brain is, by the definition of death, no longer functioning). And if self-awareness can exist non-physically, might it also exist more fully than we can appreciate in a premature, diseased, or injured human brain prior to death?

Is Medical Artificial Intelligence Ethically Neutral?

Will Knight has written several articles over this past year in MIT’s flagship journal Technology Review discussing growing concerns in the field of Artificial Intelligence (AI) that may also be of concern to bioethicists. The first concern is in the area of bias. In an article entitled “Forget Killer Robots – Bias is the Real AI Danger”, Knight provides real-world examples of hidden bias affecting people negatively. One example is an AI system called COMPAS, which judges use to estimate the likelihood that an inmate up for parole will reoffend. An independent review claims the algorithm may be biased against minorities. In a separate article, Knight identified additional examples of AI algorithms that introduced gender or minority bias into software used to rank teachers, approve bank loans and perform natural language processing. None of these examples argued that the bias was introduced intentionally or maliciously (though that certainly could happen).

This is where Knight’s second concern becomes apparent. The problem may be that the algorithms are too complex for even their programmers to examine retroactively for bias. To understand the complexity issue, one must have an introductory idea of how current AI programs work. Previously, computer programs had their algorithms “hard-wired”, so to speak. The programs were essentially complex “if this, then do this” sequences. A programmer could look at the code and generally understand how the program would react to a given input. Beginning in the 1980s, programmers started experimenting with code written to behave the way a brain neuron might behave. The goal was to model a human neuron, including the neuron’s ability to change its output behavior in real time. A neurobiologist would recognize the programming pattern as modeling the many layers of neurons in the human brain. A biofeedback expert would recognize it as including feedback that changes the input sensitivities based upon certain output goals – “teaching” the program to recognize a face or an image within a larger picture is one such example. If you want to dive deep here, begin with this link.
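As a deliberately tiny illustration of the “neuron plus feedback” idea just described (my own toy sketch in Python, not code from any system Knight discusses), here is a single artificial neuron whose input sensitivities are adjusted by feedback until its output matches a goal. Real systems stack millions of such units in many layers, which is where the opacity comes from.

```python
# Toy example: one artificial neuron "taught" the logical OR pattern by feedback.
import math, random

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs passed through a squashing (sigmoid) function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # desired behavior
weights, bias, rate = [random.uniform(-1, 1) for _ in range(2)], 0.0, 0.5

for _ in range(5000):                       # thousands of tiny corrections
    for inputs, target in data:
        out = neuron(inputs, weights, bias)
        error = target - out                # feedback: how wrong were we?
        slope = out * (1 - out)             # sensitivity of the sigmoid output
        weights = [w + rate * error * slope * x for w, x in zip(weights, inputs)]
        bias += rate * error * slope

# After training, the outputs sit close to the targets, yet nothing in the final
# weights "explains" the behavior in any human-readable way.
for inputs, target in data:
    print(inputs, round(neuron(inputs, weights, bias), 2), "target:", target)
```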

This type of programming had limited use in the 1980s because computers were too simple, able to model only rudimentary neurons and only a limited number at one time. Fast forward to the 21st century, after 30 years of Moore’s Law and its exponential growth in computing power and complexity, and suddenly these neural networks are modeling multiple layers with millions of neurons. The programs are becoming useful for analyzing complex big data and finding patterns (the proverbial needle in a haystack), and this is proving valuable in many fields, including medical diagnosis and patient management. The problem is that even the programmers cannot simply look at these programs and explain how they came to their conclusions.

Why is this important from a bioethics standpoint? Historically, arguments in bioethics could generally be categorized as consequentialist, deontological, virtue-based, hedonistic, divine command, and so on. One’s stated position was open to debate and analysis, and the underlying ethical worldview was apparent. A proprietary, cloud-based, black-box, big-data neural network making a medical decision obscures, perhaps unintentionally, the ethics behind the decision. The “WHY” of a medical decision is as important as the “HOW”. What goes into a medical decision often includes ethical weighting that ought to be as transparent as possible, and these issues are presently not easily examined in AI decisions. The bioethics community therefore needs to be vigilant as more medical decisions begin to rely on AI. We should welcome AI as another tool in helping us provide good healthcare. Given the above concerns regarding AI bias and complexity, however, we should not simply accept AI decisions as ethically neutral.

AI and the Trolley Car Dilemma

I have always hated the Trolley Car dilemma. The god of that dilemma universe has decreed that either one person or five people will die as a result of an energetic trolley car and a track switch position that only you control. Leave the switch in place and five people are run over by the trolley. Pulling the switch veers the trolley onto an alternate track, successfully saving the original five people but causing the death of a different lone person on the alternate track. Your action or inaction in this horrific Rube Goldberg contraption contributes to the death of either one or five people. Most people I know feel some sort of angst at making their decision. MIT has a website that allows you to pull the switch, so to speak, on several different variations of this dilemma and see how you compare with others who have played this game, if you enjoy that sort of thing.

Kris Hammond, Professor of Computer Science and Journalism at Northwestern, believes a robot would handle the trolley car problem far better than a human, since robots can just “run the numbers and do the right thing”. Moreover, says Professor Hammond, though we “will need them to be able to explain themselves in all aspects of their reasoning and action…[, his] guess is that they will be able to explain themselves better than we do.” Later in the article he claims that it is the very lack of angst in the decision-making process that makes the robot superior, not to mention that robots, as in the case of self-driving cars, would avoid placing us in the dilemma in the first place by collectively being better drivers.

For the sake of today’s blog, I am willing to grant that second claim in order to focus on the first: Is there really a lack of angst and, if so, does that lack make the robot’s decision right and therefore superior?

Currently, no robot has artificial intelligence advanced enough to allow for the kind of self-awareness that might create angst. Essentially, a robot lacks independent agency and as such cannot be held morally accountable for any actions resulting from its programming. The robot’s programmer certainly has that agency and can be held accountable. Presumably (hopefully), the programmer would feel some angst, at least eventually, when he or she reviews the results of the robot’s behavior that followed directly from his or her program. Is the displaced decision-making really advantageous? Is the calculus inherent in the encoded binary utilitarian logic really that simple?

IBM’s Deep Blue bested human chess grand masters more than two decades ago. Chess is a rule-based game with a large but not infinite set of possible moves. Could a robot really be programmed to handle every single variation of the trolley car dilemma? Are the five individuals on the first track or the single individual on the second track pastors, thieves, or some weird combination of both, one of whom recently saved your life? Should any of that matter? Who gets to decide?

Trolley car dilemmas seem to demand utilitarian reasoning. Robots are arguably great at making fast binary decisions, so if the utilitarian reasoning can be broken down into binary logic, a robot can make utilitarian decisions faster than humans, and certainly without experiencing human angst. Professor Hammond claims the robots will simply “run the numbers and do the right thing”. But the decisions are only right or superior if we say they are.

Utilitarian decision-making is great if everyone agrees on the utility assigned to every option. But this is clearly not the case, as the summary results on the MIT website show. Further, I think most normal people feel angst over their own decisions in situations like these, even the inconsequential decisions offered on MIT’s harmless website. So in the case of the robot, the angst doesn’t occur when the robot is executing its program – it occurred months or years ago, when the programmer assigned values to his or her utilitarian decision matrix.
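To make that last point concrete, here is a crude, hypothetical sketch of such a utilitarian decision matrix (nothing here comes from Professor Hammond or any real system). Once a human has assigned the utility values, “running the numbers” is trivial; every interesting ethical question is hiding in the table at the top.

```python
# Hypothetical utilitarian decision matrix; the weights are the programmer's choices.
UTILITY_PER_LIFE = {
    "default": 1.0,
    # Should rows like these even exist? Every extra entry is an ethical judgment.
    # "pastor": 1.2,
    # "thief": 0.8,
}

def value_of(person):
    return UTILITY_PER_LIFE.get(person, UTILITY_PER_LIFE["default"])

def pull_switch(track_a, track_b):
    """Return True if diverting the trolley (sacrificing track_b) loses less utility."""
    loss_if_stay = sum(value_of(p) for p in track_a)
    loss_if_pull = sum(value_of(p) for p in track_b)
    return loss_if_pull < loss_if_stay

# Five strangers on the main track, one on the siding: the "numbers" say pull.
print(pull_switch(["stranger"] * 5, ["stranger"]))   # True
```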

Were those the right values? (hint: there is angst here)
Who gets to decide? (hint: even more angst here)

CGI Turing Test

[Star Wars fans spoiler alert: The following contains potential story information from “Rogue One: A Star Wars Story”, the Star Wars Episode IV prequel]

I confess that I am a Star Wars geek in particular and a science fiction movie buff in general. Like many, I am old enough to have seen the first Star Wars movie at its 1977 release, before it was re-indexed as “Episode IV: A New Hope”. The computer-generated imagery (CGI) special effects in that movie revolutionized the science fiction genre. It is now commonplace to use CGI to accomplish all manner of special effects, transporting moviegoers into all sorts of fantastic virtual worlds populated by virtual characters that appear, frankly, real. Rogue One takes CGI to the next level with one particular character, so much so that I would argue the film has passed what I am calling the CGI Turing Test.

The original Turing test was described by Alan Turing, the famous British mathematician who designed and built an electromechanical computing machine in the 1940s that successfully decoded messages from the Nazi Enigma machine, a previously unbreakable encoding device that had thwarted Allied efforts to eavesdrop on Nazi military communications. The Turing test is commonly misconstrued as a test of a computer’s (artificial) intelligence, which it is not. It is actually a test of whether a computer can imitate a human well enough to convince an actual human that it (the computer) is human. The test was a variant of a party game known as the “Imitation Game”, in which a man (person A) and a woman (person B) would each try to convince a third party, the interrogator (person C), seated in a separate room, that each was the other. The Turing test substitutes a computer for person A.

Rogue One plays a similar game. There is a character in the Star Wars films named Grand Moff Tarkin, a very evil general in the Empire played by British actor Peter Cushing. Cushing debuted his Grand Moff Tarkin character in the original 1977 Star Wars movie. He is again seen reprising this role in the new 2016 Rogue One installment. I thought he was as awesome as ever. Except that he wasn’t. Peter Cushing died 22 years ago in 1994. I promise if you watch Rogue One and put yourself in the role of person C, the interrogator, you will be convinced that the CGI Peter Cushing (person A) is the real Peter Cushing (person B). So, the Academy Award® for Best Actor in a supporting role goes to…a computer at Industrial Light & Magic?

What has this to do with bioethics in general or artificial intelligence in particular? Perhaps not much. The futurist Ray Kurzweil argued in his book “The Singularity Is Near” that a machine will pass the Turing test in 2029, and perhaps this will come true, though his previous predictions have been called into question. In keeping with this AI/Turing Test theme, I gave the gift of “Google Home” and “Alexa” to different family members this Christmas. I was pleasantly amazed by the speech recognition of both systems and fully expect the technology to improve rapidly. Despite this, the foregoing discussion, and the knowledge that Turing and Kurzweil would both disagree with me, I remain convinced that our ability to create a computer that imitates a human, the Imago Hominis, so to speak, will always fall far short of His ability to create a human who reflects Himself, the Imago Dei.

As the interrogator, what do you think?

I am – is it?

This past summer, researchers in RPI’s Cognitive Science Department programmed three Nao robots to see if they could pass a test of self-awareness. Modeled after the classic “Wise Men Puzzle”, the test asked the robots whether or not they had been given a “dumbing pill” (in this case, a tap on the head that muted their verbal output) or a placebo. The test required the robots not only to respond to a verbal question (“Which pill did you receive?”) but also to recognize their own voice as distinct from the others’ and to respond correctly (“I was able to prove that I was not given the dumbing pill”). For a $9500 retail robot, this is an impressive artificial intelligence (AI) test and worth watching HERE.

Dr. Selmer Bringsjord, lead investigator and chair of the Cognitive Science Department at RPI, is careful to point out that these robots have been programmed to be self-conscious in a specific situation, and he describes his work as making progress on logical and mathematical correlates of self-consciousness. His biography page on the RPI faculty website provides a rather tongue-in-cheek assessment of the results of his research: “I figure the ultimate growth industry will be building smarter and smarter such machines on the one hand, and philosophizing about whether they are truly conscious and free on the other. Nice job security.”

I believe philosophizing about whether the robots are truly self-conscious is the more interesting topic. In its current form, while the robot appears to a human observer to be self-aware, it is really the algorithm or program that correctly indicates (realizes?) that the robot did not receive the dumbing pill. But the algorithm itself is not aware that it correctly determined which pill the robot received. One could make the algorithm more complex, such that it tests whether it correctly determined which pill the robot received. But would that algorithm really be aware that it was aware of which pill the robot received? One can see the infinite regression building. (Google: “It’s turtles all the way down”)
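Here is a toy sketch, emphatically not RPI’s actual code, of the point above: the appearance of self-awareness can be produced by ordinary conditional logic, and each added “does it know that it knows?” layer is just another function calling the one below it, with nothing anywhere that is aware of anything.

```python
# Hypothetical illustration of the regress; no resemblance to RPI's software.
def heard_own_voice(robot_id, transcript):
    """Level 1: did this robot's own utterance appear in the audio transcript?"""
    return any(speaker == robot_id for speaker, _ in transcript)

def knows_it_was_not_muted(robot_id, transcript):
    """Level 2: a check that the level-1 check succeeded."""
    return heard_own_voice(robot_id, transcript)

def knows_that_it_knows(robot_id, transcript):
    """Level 3: ...and so on, turtles all the way down."""
    return knows_it_was_not_muted(robot_id, transcript)

# One robot speaks ("I don't know"), hears itself, and "concludes" it was not muted.
transcript = [("nao_3", "I don't know which pill I received")]
if knows_that_it_knows("nao_3", transcript):
    print("I was able to prove that I was not given the dumbing pill")
```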

Perhaps the more interesting question is how we humans will react as the robot AI algorithms appear more self-aware, whether or not they actually are. Taking Dr. Bringsjord’s lead, should I invest in the domain name “spcr.org”* now or give it some more time?

 

* Society for the Prevention of Cruelty to Robots