Killer robots, or lethal autonomous weapon systems as they are often called, are weapons with autonomous capabilities that enable them to independently search for, target and attack a person or an object. These weapon systems raise not only numerous concerns relating to technology, morality, legality and policy-making, but also touch on larger questions such as the future of modern warfare.
Machines cannot feel compassion, relate or regret; they have neither judgement nor a moral conscience. If the decision to take a human life is left to a cold-hearted machine, we risk creating wars even more inhumane and cruel than the ones we see today. Additionally, these highly technical inventions are vulnerable to a number of different malfunctions, including system failures, cyber attacks and human errors, making them especially dangerous.
Since 2013 the High Contracting Parties of the Convention on Certain Conventional Weapons (CCW) have taken an interest in the debate on killer robots. Adopted in 1980, the convention is also known as the “Inhumane Weapons Convention” and aims to prevent the use of weapons that would cause unnecessary harm to combatants or civilians. Due to its flexibility, many see the CCW as an appropriate forum for further discussions on the nature, effect and legality of killer robots. However, the challenges raised by autonomous weapon systems require the international community to act in multiple forums. Killer robots also raise questions in regard to human rights protection, which was highlighted in a report on “lethal autonomous robotics” by the UN Special Rapporteur on extrajudicial, summary or arbitrary executions to the Human Rights Council.
This year’s CCW Meeting of High Contracting Parties took place in Geneva last week, on 12–13 November 2015. Killer robots were a highly debated topic, and many states shared their concerns and perspectives on the growing challenges. In addition, a number of side events during the week explored related topics, including SIPRI’s side event on the legal aspects; UNIDIR’s side events on economic drivers, maritime autonomous weapons and cyber weapons; and the Campaign to Stop Killer Robots’ side event on the role of civil society.
An interesting concept emerging from the debates on killer robots is that of meaningful human control. At this year’s CCW meeting, many representatives from civil society, the ICRC and a number of states, including Ireland, Mexico, Croatia and China, noted the usefulness of such a concept when discussing increasingly autonomous weapons. Other states, such as Israel and the United States, remained sceptical and suggested that the term was too vague and could be interpreted too narrowly. The United States argued that it would be more useful to speak of “human judgement” instead as a guideline.
The legal aspects of killer robots received particular attention during the CCW meetings. One of the main legal concerns raised was whether a machine would be able to comply with core principles of international humanitarian law such as distinction and proportionality. Would a machine detect when a combatant is about to surrender, or distinguish an active combatant from an injured or sick one?
A second legal concern related to the implementation of Article 36 reviews, found in Additional Protocol I to the Geneva Conventions. The article requires states to assess whether new weapons or methods of warfare are able to comply with international humanitarian law before developing or acquiring them. Some states, such as the United States and Russia, suggested that existing law was sufficient to regulate autonomous weapons, while other states, including China, Cuba, Pakistan and Zimbabwe, explicitly called for a new international instrument banning their development.
The best way of ensuring a world free from killer robots is to prevent them from ever being developed. Therefore, we call on states to preemptively ban these inhumane, illegal, immoral and destabilising weapon systems.