Does This Autonomous Nerf Gun Herald the Age of Killer Robots?
September 3, 2015
Well, here’s something interesting that has arisen from HP’s “disastrous” $11 billion acquisition of Autonomy: check out the three-minute YouTube video “See What You Can Create with HP IDOL OnDemand.” The footage reveals the product of developer Martin Zerbib’s “little project,” made possible with IDOL OnDemand and a Nerf gun. Watch as the system targets a specific individual, a greedy pizza grabber, a napping worker, and a thief. It seems like harmless fun, until you consider how gruesome this footage would be if the gun were real.
In my opinion, it is the wielders of weapons, not their inventors, who should be held directly responsible for misuse. Still, commenter “Dazed Confused” has a point when he rhetorically asks “What could possibly go wrong?” and links to an article in the Bulletin of the Atomic Scientists, “Stopping Killer Robots and Other Future Threats.” That piece describes an agreement being hammered out that would ban the development of fully autonomous weapons. Writer Seth Baum explains that there is precedent for such an agreement: the Saint Petersburg Declaration of 1868 banned exploding bullets, and 105 countries have now ratified the 1995 Protocol on Blinding Laser Weapons. (Such laser weapons could inflict permanent blindness on soldiers, it is reasoned.) After conceding that autonomous weaponry would have certain advantages, the article points out:
“But the potential downsides are significant. Militaries might kill more if no individual has to bear the emotional burden of strike decisions. Governments might wage more wars if the cost to their soldiers were lower. Oppressive tyrants could turn fully autonomous weapons on their own people when human soldiers refused to obey. And the machines could malfunction—as all machines sometimes do—killing friend and foe alike.
“Robots, moreover, could struggle to recognize unacceptable targets such as civilians and wounded combatants. The sort of advanced pattern recognition required to distinguish one person from another is relatively easy for humans, but difficult to program in a machine. Computers have outperformed humans in things like multiplication for a very long time, but despite great effort, their capacity for face and voice recognition remains crude. Technology would have to overcome this problem in order for robots to avoid killing the wrong people.”
Baum goes on to note that organizers base their call for a ban on existing international humanitarian law, which prohibits weapons that would strike civilians. Such reasoning has already been employed to achieve bans against landmines and cluster munitions, and is being leveraged in an attempt to ban nuclear weapons.
Will killer robots be banned before they become a reality? Any agreement would have to move much faster than bureaucracy usually does; given the public example of Zerbib’s “little project,” I suspect it is already far too late for that.
Cynthia Murrell, September 3, 2015