Killer robot threat must be faced, say experts

Fears of killer robots doing the bidding of terrorists, despots and hackers have prompted leaders from the AI and robotics industries to call on the United Nations to ban the use of lethal autonomous weapons.

In an open letter published today, 116 AI heavyweights from 26 countries, including Elon Musk (of Tesla, SpaceX and OpenAI fame) and Mustafa Suleyman (of Google’s DeepMind), warn of an incipient “third revolution in warfare” and ask the UN to “find a way to protect us all from these dangers”.

Lethal autonomous weapons can operate without human supervision and possess some degree of “decision-making” ability, including the ability to choose their own targets. By turning the astounding recent advances in machine learning and robotics to the task of killing, they threaten to “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”, the letter says.

A similar open letter was signed by thousands of AI and robotics researchers in 2015. In December 2016, 123 countries at the UN unanimously agreed to begin formal discussions on this kind of weaponry, and 19 countries have already called for an outright ban.

Toby Walsh, a professor of AI at the University of New South Wales and one of the organisers of the letter, says we stand at a crossroads in the development of AI: it can be used to help tackle social problems such as inequality, climate change and economic crises, or it can be turned to the industrialisation of war.

“We need to make decisions today choosing which of these futures we want,” says Walsh.

The letter was released at the International Joint Conference on Artificial Intelligence 2017 in Melbourne, Australia, the world’s biggest AI conference.