I have always hated the Trolley Car dilemma. The god of that dilemma universe has decreed that either one person or five people will die as a result of an energetic trolley car and a track switch position that only you control. Leave the switch in place and five people are run over by the trolley. Pulling the switch veers the trolley onto an alternate track, successfully saving the original five people but causing the death of a different lone person on the alternate track. Your action or inaction in this horrific Rube Goldberg contraption contributes to the death of either one or five people. Most people I know feel some sort of angst at making their decision. MIT has a website that allows you to pull the switch, so to speak, on several different variations of this dilemma and see how you compare with others who have played this game, if you enjoy that sort of thing.
Kris Hammond, Professor of Computer Science and Journalism at Northwestern, believes a robot would handle the trolley car problem far better than a human since it can just “run the numbers and do the right thing”. Moreover, says Professor Hammond, though we “will need them to be able to explain themselves in all aspects of their reasoning and action,” his guess is that “they will be able to explain themselves better than we do.” Later in the article he claims that it is the very lack of angst in the decision-making process that makes the robot superior, not to mention that robots, as in the case of self-driving cars, would avoid placing us in the dilemma in the first place by collectively being better drivers.
For the sake of today’s blog, I am willing to grant that second claim in order to focus on the first: Is there really a lack of angst and, if so, does that lack contribute to making the robot’s decision right and therefore superior?
Currently, no robot has artificial intelligence advanced enough to allow for the kind of self-awareness that creates angst. Essentially, a robot lacks independent agency and as such cannot be held morally accountable for any actions resulting from its programming. The robot’s programmer, however, both has agency and can be held accountable. Presumably (hopefully) the programmer would feel some angst, at least eventually, when he or she reviews the results of the robot’s behavior that followed directly from his or her program. Is the displaced decision-making really advantageous? Is the calculus inherent in the encoded binary utilitarian logic really that simple?
Deep Blue, IBM’s chess computer, famously bested the human world champion, and Watson, IBM’s later artificial intelligence system, went on to win at Jeopardy!. But chess is a rule-based game with a large but not infinite set of possible moves. Could a robot really be programmed to handle every single variation of the trolley car dilemma? Are the five individuals on the first track or the single individual on the second track pastors, thieves, or some weird combination of both, one of whom recently saved your life? Should any of that matter? Who gets to decide?
Trolley car dilemmas seem to demand utilitarian reasoning. Robots are arguably great at making fast binary decisions, so if the utilitarian reasoning can be broken down into binary logic, a robot can make utilitarian decisions faster than humans, and certainly without experiencing human angst. Professor Hammond claims the robots will simply “run the numbers and do the right thing”. But the decisions are only right or superior if we say they are.
Utilitarian decision-making is great if everyone agrees on the utility assigned to every outcome. But this is clearly not the case, as the summary results on the MIT website show. Further, I think that most people feel angst over their own decisions in situations like these, even the inconsequential ones offered on MIT’s harmless website. So in the case of the robot, the angst doesn’t occur when the robot is actualizing its program – it occurred months or years ago, when the programmer assigned values to his or her utilitarian decision matrix.
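To make that point concrete, here is a minimal sketch of what such a “utilitarian decision matrix” might look like as code. Everything here is a hypothetical illustration of my own, not Professor Hammond’s actual proposal: the track names, the outcomes, and especially the utility value assigned to each life are choices a programmer made long before the trolley ever rolled.

```python
# Hypothetical sketch of a "utilitarian decision matrix".
# The morally loaded step is not the computation below -- it is the
# programmer's choice of these values, made months or years in advance.

TRACK_OUTCOMES = {
    "stay": ["person"] * 5,  # leave the switch: five people on the main track
    "pull": ["person"] * 1,  # pull the switch: one person on the side track
}

# Someone had to decide what each life "costs". Here, every life
# counts equally -- itself a contestable moral judgment.
UTILITY_PER_PERSON = -1

def choose_action(outcomes):
    """Pick the action with the highest total utility (least total harm)."""
    def total_utility(action):
        return sum(UTILITY_PER_PERSON for _ in outcomes[action])
    return max(outcomes, key=total_utility)

print(choose_action(TRACK_OUTCOMES))  # the robot "runs the numbers": pull
```

The robot executes this instantly and without angst, exactly as claimed – but only because all of the agonizing was done up front, frozen into `UTILITY_PER_PERSON` and the outcome table by a human.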
Were those the right values? (hint: there is angst here)
Who gets to decide? (hint: even more angst here)