Maybe Killer Robots Aren’t All That Bad
Given the choice, I think I'd rather be killed by a robot than by a human being. There are no messy emotions involved; the act is cold, calculated, (presumably) over as quickly as possible, and, with the machine in question built to spec for human eradication, relatively painless. The alternative, getting killed by a person drilled in the proper techniques for ending the life of someone a lot like them, isn't too appealing.
But just try telling that to the UN's special rapporteur on extrajudicial, summary, or arbitrary executions, Christof Heyns. Worried about the advancement of "killer robots" in the warzone, Heyns addressed the UN Human Rights Council Thursday, recommending an international moratorium on "lethal autonomous robotics" (LARs) that should be instituted "while the genie is still in the bottle," technically speaking. That is to say, Heyns wants world governments to halt the development of, and research into, robots that can kill us without our express involvement.
Robots, the argument goes, don't have any sense of morality or mortality, so having them participate in a combat zone essentially amounts to "mechanical slaughter," as if war has ever really been anything else at its core. As such, giving them the ability to decide whether or not to kill can only result in even more human deaths than we see in war now. And that's not even mentioning the whole "who do you prosecute for war crimes" argument. It could even lead to our downfall, should you believe the pop culture hype.
Heyns, naturally, isn't the only one concerned. Mark Bishop, chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, has his reservations as well, saying that killer robots can't "judge the need to engage, react to threats proportionately, or reliably discriminate between combatants and civilians," as he told Slate. Ditto for the Campaign to Stop Killer Robots. At least some of the world is on edge about our apparent impending doom.
Thank the Terminators, HAL, and ED-209 for a vivid picture of what Heyns and others are worried about. Those machines are super-advanced, to be sure, but we're not that far off from fielding similar combat systems, however "in the bottle" Heyns thinks the genie is.
True, there are currently no fully automated combat systems in use, but there are some that come close. Chief among them is our very own X-47B, a more autonomous cousin of the Obama administration's drones that have been garnering so much press lately, but even it is only semi-autonomous, requiring a human to direct mission-related actions. Then, of course, there's the Phalanx gun system (a.k.a. the CIWS), which, mounted on Navy ships, serves as a formidable anti-missile defense. Even France, not exactly known for its military prowess, is developing the semi-autonomous nEUROn drone.
And expecting any world government to back off that avenue under threat of an "international moratorium" is laughable (didn't we ban poison gas? And nukes? And genocide?). Discouraging any future use of killer robots on the battlefield would require removing the want or need to get to the battlefield in the first place, and that seems even further away than LARs. Sure, the U.S. did issue a directive back in November pledging to keep humans in the loop with autonomous weapons for the next 10 years, but that's little more than a bureaucratic finger in the dike of emerging technology.
Given world governments' responses to the proliferation of autonomous combat systems, it looks like the dike is about to burst. In that sense, any nation that agrees to an LAR ban before Heyns and the UN can guarantee the whole world has signed up is being monstrously foolish in terms of national defense. But even then, the whole Terminator/RoboCop/2001: A Space Odyssey alarmist view of the LAR enterprise is over the top at best. Consider it the international-scale version of some U.S. politicians' response to 3-D printed guns: half-cocked and fear-mongering.
That we'll be living with robots designed with the express purpose of fully automated death in mind is unsettling, to be sure, but if we follow roboticist Ronald Arkin's interpretation, it isn't entirely bad. As I mentioned above, robots lack the capacity for emotion and make entirely dispassionate judgments: that means no malice, no revenge, less collateral damage, and, in principle, the ability to quickly and accurately distinguish civilian from soldier. So long as the programming is meticulous and well thought out, the probability of a Skynet-style future seems close to zero.
Technology aside, the UN's response (and the public's reaction to it) is fairly telling about where we are as a global society, morally speaking. There are thousands of people up in arms about machines maybe killing humans in the pretty distant future, each of them essentially responding to the advance in technology with, "Well, I'd really be more comfortable if people continued to kill each other instead."
So, maybe our future is doomed either way.