
Killer Robots

January 19, 2018 | Expert Insights

More than 1,000 tech experts, scientists and researchers have warned about the dangers of autonomous weapons. Robotics leaders worry that the creation of autonomous killer robots will usher in a new age of warfare, much as gunpowder and nuclear weapons did.

Some argue we must stop the threat now, before it is too late. Others say it is impossible to impede the march of technology; this is simply the world we live in now.

Background

The idea of robotic warfare has been a sci-fi staple for ages. Decades before Terminator invoked a hellish world pitting man against machine, Karel Čapek's 1920s play R.U.R., which introduced the word "robot", predicted the end of humanity at the metallic hands of murderous bots.

Lately, however, the topic has become a much bigger issue as science fiction has become science reality. The likes of Elon Musk of Tesla and Mustafa Suleyman of Google's DeepMind have written to the United Nations urging a ban on the development and use of autonomous "killer robots" such as armed drones, tanks, and automated machine guns.

Ever since the debate on lethal autonomous weapon systems (LAWS) first began circa 2013, polarized opinions and doomsday prophecies have hindered a more nuanced analysis of the issue.

There are four key arguments for banning killer robots. First, that the development of autonomous weapons will reduce combat fatalities for the aggressor, driving militaries to engage more frequently. Second, that these weapons will proliferate rapidly, ultimately falling into the hands of authoritarian regimes. Third, that the international community has successfully banned devastating weapons in the past, such as biological ones, suggesting a ban is feasible. Finally, that they will kick-start an AI arms race. Fears about the rise of AI and the risk posed by machines have escalated in recent years as the technology's use in warfare develops.

That letter was signed by 116 robotics and artificial intelligence leaders from around the world, including Mr Musk and Google's DeepMind co-founder Mustafa Suleyman.

A similar letter warning of the dangers of autonomous weapons, signed by a slew of tech experts including Stephen Hawking, Apple co-founder Steve Wozniak and Mr Musk, was released in 2015.

Analysis

Mr Musk has long been vocal about the dangers of AI. The billionaire founder of SpaceX has described artificial intelligence as humanity's "biggest existential threat". He co-founded OpenAI, a non-profit working towards "safer artificial intelligence".

Lethal autonomous weapons in some shape or form are coming; drones are an early example. Militaries, rebels, and terrorists have taken to drones to push the boundaries of warfare. Groups who lack access to military drones simply buy commercial drones and strap bombs to them. The same thing may happen once robots become commonplace: people who want to turn them into weapons absolutely will.

Technology doesn't stop advancing just because we want it to. Experts say it is more important to create an ethical framework that defines when these new weapons can and cannot be used. Nuclear weapons won't go away, but the ethics surrounding their use severely limits how they can be deployed. By clearly defining how these weapons can and should be used, the world can keep a lid on runaway destruction.

Evan Ackerman at IEEE Spectrum argues that killer robots might be better than humans at minimizing death and destruction. Unlike humans, robots don't get scared or panic. In theory, robots could be given stricter rules of engagement than human soldiers and would follow them exactly, limiting the scope of targets in a conflict area.

Experts say the world needs an international framework to ban the use of killer robots before it's too late. Once the systems are out in the wild, Pandora's box will be open forever. Weapons which can autonomously decide whether or not to kill will fundamentally change warfare. Science fiction has done a great job of imagining the horrors. What matters now is whether or not we'll let it become a reality.

Assessment

Our assessment is that governments across the world must decide whether to heed the warnings of technology leaders and experts or continue pushing the boundaries of what military combat could become. We believe that killer robots pose a threat to humanity and that any autonomous ‘kill functions’ should be banned.