Whoa, Robot

Among this week’s many events, three stand out: I watched the movie I, Robot for the umpty-umpth time; the FAA announced it will make it easier for domestic agencies (e.g., the police) to fly unmanned drones in the U.S.; and the Wall Street Journal ran this article by Dr. Jonathan Moreno of the University of Pennsylvania: “Robot Soldiers Will Be a Reality—and a Threat.”
Dr. Moreno, well known to readers of this blog through Heather Zeiger’s recent series of posts, worries deeply about the confluence of neuroscience and military technology. Some conceivable applications of bleeding-edge neuroscience threaten individual autonomy and public freedom, and write the questions of justice in large letters. As skeptics of human enhancement point out, what is to prevent some future enhanced being, humanoid or not, from taking over from the rest of us? The “dual use” problem (that technologies may be used for good or ill) poses, at the extreme, existential risks to the human race, risks so great that they render “risk/benefit” analysis useless. (Dr. Moreno made this last point in 2010, before the Presidential Commission for the Study of Bioethical Issues, regarding synthetic biology.)
Brain-machine interface technology, which could help paralyzed people control artificial limbs with their thoughts alone, could be used not only to dominate an enhanced human soldier’s mind (as Heather pointed out) but also to let humans oversee increasingly autonomous weapons systems. Even now, automated systems (without the human brain interface, to be sure) are on duty in the Korean DMZ. More sinister still is the prospect that experiments in quantum computing will lead to robotic warriors based on “whole brain emulation,” capable of making attack decisions beyond any human’s control. Such warriors could be turned against innocent people, in defiance of the laws of war and of honor.
It calls to mind the dystopian logical conclusion of I, Robot’s three laws, and Dr. Lanning’s musing that “free radicals” of code might spontaneously come to mimic, at least, the actions of the soul (intention and self-awareness, for example). Setting aside the deep questions of philosophy of mind (for now), Dr. Moreno recommends urgent international negotiation: treaties, rules, laws, verification systems, human overrides, and the like. He also argues that “fully autonomous offensive lethal weapons should never be permitted.”
He’s right. And yet, for starters, we have a much more “nearfetched” problem at our door: if we allow unmanned surveillance drones to fly around our communities, even in the name of only watching for the bad guys, we accept the use of an instrument of war over our own people. Let’s start by banning that. It’s tempting to make exceptions where the police seem outgunned, but we should advocate a ban on domestic drones all the same. (What’s that, guns are also an instrument of war? Not like drones, they’re not. Anybody hunt deer with a drone?)
Which brings me, at last, to my points:

1) It is time now to stand, on many fronts, and say “we shall not.”

2) Support Dr. Moreno’s negotiations, but realize that no set of rules will buy complete security, that bad actors will persist, and that the situation is so complex that useful rules will be elusive (how’s that Dodd-Frank regulation writing going?).

3) Aggressively defend the so-called “medical boundary”: technology should be used to heal disease, not to do “extra-human” things. It’s not a bright line, but it is a useful concept nonetheless.

4) Accept (and here’s where it gets really hard) that there is such a thing as a normative, or essential, human nature that establishes boundaries that ought never be violated (a position probably inaccessible, in any meaningful way, to the atheist/naturalist).

5) Consider moving the “ban boundary” into the laboratory. Why, in this case, should we be doing experiments in quantum computing at all? What legitimate, God-honoring purposes are being served? The scientists should be pressed, hard, to articulate their goals.
In the meantime, I wonder if I should take cover….