Killer robots
A robot is pictured in front of the Houses of Parliament and Westminster Abbey as part of the Campaign to Stop Killer Robots in London, April 23, 2013. Robots with the ability to attack targets without any human intervention must be banned before they are ever developed for battlefield use, campaigners urged. Reuters

Two human rights groups are sounding the alarm on the dangers of deploying so-called killer robots in war, urging nations not to pursue the technology because it’s difficult to hold someone accountable when fully autonomous weapons choose whom to kill without human input, according to a report released Thursday. “The hurdles to accountability for the production and use of fully autonomous weapons under current law are monumental,” wrote the authors of “Mind the Gap: The Lack of Accountability for Killer Robots.”

Killer robots haven’t made their way to the battlefield just yet, but Human Rights Watch and Harvard Law School’s International Human Rights Clinic recommended both international agreements and national laws to stop the development, production and use of fully autonomous weapons. The nature of killer robots makes it nearly impossible for victims of the technology to pursue legal recourse, the authors said.

“The weapons themselves could not be held accountable for their conduct because they could not act with criminal intent, would fall outside the jurisdiction of international tribunals, and could not be punished,” they wrote. “Criminal liability would likely apply only in situations where humans specifically intended to use the robots to violate the law. In the United States at least, civil liability would be virtually impossible due to the immunity granted by law to the military and its contractors and the evidentiary obstacles to products liability suits.”

Killer robots themselves couldn’t be held legally responsible because they can’t have criminal intent the way humans do, according to the report. “A fully autonomous weapon itself could not be found accountable for criminal acts that it might commit because it would lack intentionality,” the authors wrote.

Meanwhile, the high legal standards for command responsibility would make it difficult to successfully prosecute the commander of a killer robot, according to the report. “The autonomous nature of killer robots would make them legally analogous to human soldiers in some ways, and thus it could trigger the doctrine of indirect responsibility, or command responsibility,” the report said. “A commander would nevertheless still escape liability in most cases. Command responsibility holds superiors accountable only if they knew or should have known of a subordinate’s criminal act and failed to prevent or punish it. These criteria set a high bar for accountability for the actions of a fully autonomous weapon.”

The report echoed the position of the Campaign to Stop Killer Robots, a group formed in 2013 by a number of nongovernmental organizations, including Human Rights Watch. “Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack,” the group says on its website. “As a result, fully autonomous weapons would not meet the requirements of the laws of war.”