Ethics of Artificial Intelligence

The May 28, 2015 issue of Nature was largely devoted to artificial intelligence.  Of note to readers of this blog was a short set of comments from four contributors regarding the ethics of AI.  While none of the writers thought that “robots are (or will be) people, too,” so to speak, robots will be increasingly complex machines that pose issues like:

  • Coexistence—Creating a need for procedures and understandings for living alongside robots, which will “complement human beings, not supplant them.”  Robots will likely always have “significant limitations” relative to humans and “will need to learn when to ask for help.”  Gee.
  • Justice—Ensuring that the benefits of robotics and AI are broadly available, not just to a rich elite.  For example, sophisticated pattern-recognition systems may come to make better medical diagnoses than doctors can.  But the processes will be complex, not understood by clinicians, and much less intuitive than, say, iterative Bayesian inference (a minimal sketch of which appears after this list).  The resulting mysterious black box will make the outputs harder to communicate and, indeed, to accept.
  • Communication in forming policy—Science fiction and AI enthusiasm, it is argued, greatly overstate what can be done.  The public does not understand the technical issues, and the research community is either uninterested in educating the public or despairing of the value of trying.  Researchers should be more engaged.
  • Lethal Autonomous Weapons Systems (LAWS)—A huge near-term concern.  While many countries at least say they have sworn off developing such things, all that would be needed is a combination of several currently existing capabilities.  It’s all too likely that a LAWS would kill indiscriminately and be (over)used in policing, not just warfare.  LAWS could search and destroy but would not be able to follow the Geneva Conventions.  In the view of Stuart Russell of the University of California, Berkeley:

The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.
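For a sense of why clinicians find iterative Bayesian inference more intuitive than a black-box classifier (the contrast drawn in the justice bullet above), here is a minimal Python sketch of updating a diagnostic probability as test results arrive.  The prevalence, sensitivity, and specificity figures are hypothetical, chosen purely for illustration; nothing here comes from the Nature commentary itself.

```python
# A minimal sketch of iterative Bayesian updating of a diagnosis.
# All numbers are hypothetical and for illustration only.

def bayes_update(prior, sensitivity, specificity, test_positive):
    """Return the posterior probability of disease after one test result."""
    if test_positive:
        p_result_given_disease = sensitivity        # P(+ | disease)
        p_result_given_healthy = 1.0 - specificity  # P(+ | no disease)
    else:
        p_result_given_disease = 1.0 - sensitivity  # P(- | disease)
        p_result_given_healthy = specificity        # P(- | no disease)
    numerator = p_result_given_disease * prior
    evidence = numerator + p_result_given_healthy * (1.0 - prior)
    return numerator / evidence

p = 0.01  # hypothetical prior: 1% prevalence of the condition
for positive in [True, True, False]:  # a hypothetical sequence of test results
    p = bayes_update(p, sensitivity=0.90, specificity=0.95, test_positive=positive)
    print(f"posterior after {'positive' if positive else 'negative'} test: {p:.3f}")
```

Every intermediate posterior here can be inspected and explained to a patient, which is exactly what the bullet above suggests a complex pattern-recognition system will not offer.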

LAWS is perhaps not a bioethical target per se, but it is another example of technology embodying an idea and taking on a life of its own, outstripping well-meaning attempts to limit it.  And while it doesn’t happen by “chance,” it sort of sneaks up on us.  I passed along Steve Wozniak’s thoughts in this regard in my April 10, 2015 post, and I fretted about autonomous robot soldiers in my post of May 15, 2012.

I claim no new insights now—just reporting.

3 Comments
sabbas
5 years ago

Glenn McGee is a globally respected man; everyone may have an opinion about that great man, but no one denies his vision or the difficulty of his goal.
Every visionary takes huge blows as he tries to push things forward, especially a bioethics genius like McGee.

http://www.upenn.edu/gazette/0697/babies.html

Does anyone expect to do battle in a topic that blends philosophy, theology, history, and medicine and not draw an immediate negative reaction from the establishment?

Dr Lucus
3 years ago
Reply to  sabbas

Very interesting read, but it does not offer much in the way of ethics and artificial intelligence…

Aliza
4 years ago

This is all well and good, but what happens if something goes wrong?  We could have a huge rebellion of superhumans wanting to take over the world…