The May 28, 2015 issue of Nature was largely devoted to artificial intelligence. Of note to readers of this blog was a short set of comments from four contributors regarding the ethics of AI. While none of the writers claimed that “robots are (or will be) people, too,” so to speak, all agreed that robots will be increasingly complex machines that pose issues like:
- Coexistence–Creating a need for procedures and understandings to coexist with robots, who will “complement human beings, not supplant them.” Robots will likely always have “significant limitations” relative to humans, and “will need to learn when to ask for help.” Gee.
- Justice–Ensuring that the benefits of robotics and AI are generally available to people, not just to the rich elite. For example, sophisticated pattern recognition systems may become able to make medical diagnoses better than doctors can. But the processes will be complex, not understood by clinicians, and much less intuitive than, say, iterative Bayesian inferences. The resulting mysterious black box will make the outputs harder to communicate, and, indeed, to accept.
- Communication in forming policy—science fiction and AI enthusiasm greatly overstate what can be done, it is argued. The public does not understand the technical issues, and the research community is either uninterested in educating the public or despairs of the value of trying. Researchers should be more engaged.
- Lethal Autonomous Weapons Systems (LAWS)—a huge near-term concern. While many countries at least say they have sworn off developing such things, building one would require only combining several currently existing capabilities. It’s all too likely that a LAWS would kill indiscriminately, and be (over)used in policing and not just warfare. LAWS could search and destroy but would not be able to follow the Geneva Conventions. In the view of Stuart Russell of the University of California, Berkeley,
“The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.”
This is perhaps not a bioethical target, but another example of technology embodying an idea and taking on a life of its own, outstripping well-meaning attempts to limit it. And while it doesn’t happen by “chance,” it sort of sneaks up on us. I passed along Steve Wozniak’s thoughts in this regard in my April 10, 2015 post, and I fretted about autonomous robot soldiers in my post of May 15, 2012.
I claim no new insights now—just reporting.