Human Rights Groups Sound Alarm Over ‘Killer Robot’ Threat

This story was originally published on Aug. 30, 2018, and is brought to you today as part of our Best of ECT News series.

Leaders from Human Rights Watch and Harvard Law School’s International Human Rights Clinic last week issued a dire warning that nations around the world haven’t been doing enough to ban the development of autonomous weapons — so-called “killer robots.”

The groups issued a joint report that calls for a complete ban on these systems before such weapons begin to make their way to military arsenals and it becomes too late to act.

Other groups, including Amnesty International, joined in those urgent calls for a treaty to ban such weapons systems, in advance of this week’s meeting of the United Nations Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems in Geneva.

This week’s gathering is the second such event. Last year’s meeting marked the first time delegates from around the world discussed the global ramifications of killer robot technologies.

“Killer robots are no longer the stuff of science fiction,” said Rasha Abdul Rahim, Amnesty International’s advisor on artificial intelligence and human rights. “From artificially intelligent drones to automated guns that can choose their own targets, technological advances in weaponry are far outpacing international law.”

Last year’s first meeting did result in many nations agreeing to ban the development of weapons that could identify and fire on targets without meaningful human intervention. To date, 26 nations have called for an outright killer robot ban, including Austria, Brazil and Egypt. China has called for a new CCW protocol that would prohibit the use of fully autonomous weapons systems.

However, the United States, France, Great Britain, Israel, South Korea and Russia have registered opposition to creating any legally binding prohibitions of such weapons, or the technologies behind them.

Public opinion is mixed, based on a Brookings Institution survey that was conducted last week.

Thirty percent of adult Americans supported the development of artificial intelligence technologies for use in warfare, it found, with 39 percent opposed and 32 percent unsure.

However, support for the use of AI capabilities in weapons increased significantly if American adversaries were known to be developing the technology, the poll also found.

In that case, 45 percent of respondents said they would support U.S. efforts to develop AI weapons, versus 25 percent who were opposed and 30 percent who were unsure.

New Kind of WMD

The science of killing has been taken to a new technological level — and many are concerned about loss of human control.

“Autonomous weapons are another example of military technology outpacing the ability to regulate it,” said Mike Blades, research director at Frost & Sullivan.

In the mid-19th century, Richard Gatling developed the first successful rapid-fire weapon, his eponymous Gatling gun, a design that led to modern machine guns. When the machine gun was used on the battlefields of the First World War 100 years ago, military leaders were utterly unable to comprehend its killing potential. The result was horrific trench warfare. Tens of millions were killed over the course of the four-year conflict.

One irony is that Gatling said that he created his weapon as a way to reduce the size of armies, and in turn reduce the number of deaths from combat. However, he also thought such a weapon could show the futility of warfare.

Autonomous weapons have a similar potential to reduce the number of soldiers in harm’s way — but as with the Gatling gun or the World War I-era machine gun, new devices could increase the killing potential of a handful of soldiers.

Modern military arsenals already can take out vast numbers of people.

“One thing to understand is that autonomy isn’t actually increasing ability to destroy the enemy. We can already do that with plenty of weapons,” Blades told TechNewsWorld.

“This is actually a way to destroy the enemy without putting our people in harm’s way — but with that ability there are moral obligations,” he added. “This is a place where we haven’t really been, and have to tread carefully.”

Destructiveness Debate

There have been other technological weapons advances, from the poison gas that was used in the trenches of World War I a century ago to the atomic bomb that was developed during the Second World War. Each in turn became an issue for debate.

The potential horrors that autonomous weapons could unleash now are receiving the same level of concern and attention.

“Autonomous weapons are the biggest threat since nuclear weapons, and perhaps even bigger,” warned Stuart Russell, professor of computer science and Smith-Zadeh professor of engineering at the University of California, Berkeley.

“Because they do not require individual human supervision, autonomous weapons are potentially scalable weapons of mass destruction. Essentially unlimited numbers can be launched by a small number of people,” he told TechNewsWorld.

“This is an inescapable logical consequence of autonomy,” Russell added, “and as a result, we expect that autonomous weapons will reduce human security at the individual, local, national and international levels.”

A notable concern with small autonomous weapons is that their use could result in far less physical destruction than nuclear weapons or other WMDs might cause, which could make them almost “practical” in comparison.

Autonomous weapons “leave property intact and can be applied selectively to eliminate only those who might threaten an occupying force,” Russell pointed out.

‘Cheap, Effective, Unattributable’

As with poison gas or technologically advanced weapons, autonomous weapons can be a force multiplier. The Gatling gun could outperform literally dozens of soldiers. In the case of autonomous weapons, one million potentially lethal units could be carried in a single container truck or cargo aircraft. Yet these weapons systems might require only two or three human operators rather than two or three million.

“Such weapons would be able to hunt for and eliminate humans in towns and cities, even inside buildings,” said Russell. “They would be cheap, effective, unattributable, and easily proliferated once the major powers initiate mass production and the weapons become available on the international arms market.”

This could give a small nation, rogue state or even a lone actor the ability to do considerable harm. Development of these weapons could even usher in a new arms race among powers of all sizes.

For this reason, the cries to ban them before they are even developed have been increasing in volume, especially as development of the core technologies — AI and machine learning — for civilian purposes advances. Those same technologies easily could be militarized to create weapons.

“Fully autonomous weapons should be discussed now, because due to the rapid development of autonomous technology, they could soon become a reality,” said Bonnie Docherty, senior researcher in the arms division at Human Rights Watch, and one of the authors of the recent paper that called for a ban on killer robots.

“Once they enter military arsenals, they will likely proliferate and be used,” she told TechNewsWorld.

“If countries wait, the weapons will no longer be a matter for the future,” Docherty added.

Many scientists and other experts already have been heeding the call to ban autonomous weapons, and thousands of AI experts this summer signed a pledge not to assist with the development of the systems for military purposes.

The pledge is similar to the Manhattan Project scientists’ calls not to use the first atomic bomb. Instead, many of the scientists who worked to develop the bomb suggested that the military merely provide a demonstration of its capability rather than use it on a civilian target.

The strong opposition to autonomous weapons today “shows that fully autonomous weapons offend the public conscience, and that it is time to take action against them,” observed Docherty.

Pressing the Panic Button?

However, the calls by the various groups arguably could be a moot point.

Although the United States has not agreed to limit the development of autonomous weapons, research efforts actually have been focused more on systems that utilize autonomy for purposes other than as combat weapons.

“DARPA (Defense Advanced Research Projects Agency) is currently investigating the role of autonomy in military systems such as UAVs, cyber systems, language processing units, flight control, and unmanned land vehicles, but not in combat or weapon systems,” said spokesperson Jared B. Adams.

“The Department of Defense issued directive 3000.09 in 2012, which was re-certified last year, and it notes that humans must retain judgment over the use of force even in autonomous and semi-autonomous systems,” he told TechNewsWorld.

“DARPA’s autonomous research portfolio is defensive in nature, looking at ways to protect soldiers from adversarial unmanned systems, operate at machine speed, and/or limit exposure of our service men and women from potential harm,” Adams explained.

“The danger of autonomous weapons is overstated,” suggested USN Captain (Ret.) Brad Martin, senior policy researcher for autonomous technology in maritime vehicles at the Rand Corporation.

“The capability of weapons to engage targets without human intervention has existed for years,” he told TechNewsWorld.

Semi-autonomous systems, those that wouldn’t give full capability to a machine, also could have benefits. For example, autonomous systems could react far more quickly than human operators.

“Humans making decisions actually slows things down,” noted Martin, “so in many weapons this is less a human rights issue and more a weapons technology issue.”

Automated Decision Making

Where the issue of killer robots becomes more complicated is in semi-autonomous systems — those that do have that human element. Such systems could enhance existing weapons platforms and also could help operators determine if it is right to “take the shot.”

“Many R&D programs are developing automated systems that can make those decisions quickly,” said Frost & Sullivan’s Blades.

“AI could be used to identify something where a human analyst might not be able to work with the information given as quickly, and this is where we see the technology pointing right,” he told TechNewsWorld.

“At present there aren’t really efforts to get a fully automated decision-making system,” Blades added.

These semi-autonomous systems also could allow weapons to be deployed closer to a target than a human operator could safely go. They could reduce the number of “friendly fire” incidents as well as collateral damage. Rather than being systems that might increase casualties, the weapons could become more surgical in nature.

“These could provide broader sensor coverage that can reduce the battlefield ambiguity, and improved situational awareness at a chaotic moment,” Rand’s Martin said.

“Our campaign does not seek to ban either semi-autonomous weapons or fully autonomous non-weaponized robots,” said Human Rights Watch’s Docherty.

“We are concerned about fully autonomous weapons, not semi-autonomous ones; fully autonomous weapons are the step beyond existing, remote-controlled armed drones,” she added.

Mitigation Strategy

It’s uncertain whether the development of autonomous weapons — even with UN support — could be stopped, and it’s questionable whether it should be stopped entirely. As with the atomic bomb, the machine gun and poison gas before them, if even one nation possesses the technology, then other nations will want to be sure they have the ability to respond in kind.

The autonomous arms race therefore could be inevitable. A comparison can be made to chemical and biological weapons. The Biological Weapons Convention — the first multilateral disarmament treaty banning the development, production and stockpiling of an entire category of WMDs — was introduced in 1972, and the Chemical Weapons Convention followed in 1993. Yet many nations continued to maintain supplies of chemical weapons, which were used in the Iran-Iraq War in the 1980s and more recently by ISIS fighters, and by the Syrian government in its ongoing civil war.

Thus the development of autonomous weapons may not be stopped entirely, but their actual use could be mitigated.

“The U.S. may want to be in the lead with at least the rules of engagement where armed robots might be used,” suggested Blades.

“We may not be signing on to this agreement, but we are already behind the limits of the spread of other advanced weapons,” he noted.

It is “naive to yield the use of something that is going to be developed whether we like it or not, especially as this will end up in the hands of those bad actors that may not have our ethical concerns,” said Martin.

During the Cold War, nuclear weapons meant mutually assured destruction, but as history has shown, other weapons — including poison gas and other chemical weapons — most certainly were used, even recently in Iraq and Syria.

“If Hitler had the atomic bomb he would have found a way to deliver it on London,” Martin remarked. “That is as good an analogy to autonomous weapons as we can get.”

Peter Suciu has been an ECT News Network reporter since 2012. His areas of focus include cybersecurity, mobile phones, displays, streaming media, pay TV and autonomous vehicles. He has written and edited for numerous publications and websites, including Newsweek, Wired and FoxNews.com.
