Veterans Day seems like a fitting day for this discussion! At present, the US is one of only two nations formally opposing the proposed UN ban on lethal autonomous robots under the Convention on Certain Conventional Weapons. If you’d like to learn more, I’ve copied some information from the Campaign to Stop Killer Robots below. There’s also a very informative recent article by Lisa A. Bergstrom in the Bulletin of the Atomic Scientists. As robot makers, we should be fully informed about this issue.
The problem
Fully autonomous weapons would decide who lives and dies without further human intervention, crossing a moral threshold. As machines, they would lack the inherently human characteristics, such as compassion, that are necessary to make complex ethical choices.
The US, China, Israel, South Korea, Russia, and the UK are developing weapons systems with significant autonomy in the critical functions of selecting and attacking targets. If left unchecked, the world could enter a destabilizing robotic arms race.
Replacing troops with machines could make the decision to go to war easier and shift the burden of conflict even further onto civilians. Fully autonomous weapons would make tragic mistakes with unanticipated consequences that could inflame tensions.
Fully autonomous weapons would lack the human judgment necessary to evaluate the proportionality of an attack, distinguish civilian from combatant, and abide by other core principles of the laws of war. History shows their use would not be limited to certain circumstances.
It’s unclear who, if anyone, could be held responsible for unlawful acts caused by a fully autonomous weapon: the programmer, the manufacturer, the commander, or the machine itself. This accountability gap would make it difficult to ensure justice, especially for victims.
Fully autonomous weapons could be used in circumstances outside of armed conflict, such as border control and policing. They could be used to suppress protest and prop up regimes. Force intended as non-lethal could still cause many deaths.
The solution
The development, production, and use of fully autonomous weapons must be banned.
Retain meaningful human control over targeting and attack decisions by prohibiting the development, production, and use of fully autonomous weapons. Enact the ban through national laws and an international treaty.
All countries should articulate their views on the concerns raised by fully autonomous weapons and commit to creating a new ban treaty that establishes the principle of meaningful human control over the use of force.
All technology companies and organizations, as well as individuals working to develop artificial intelligence and robotics, should pledge never to contribute to the development of fully autonomous weapons.
Upcoming Events
DEC. 13 – DEC. 13 (SAN FRANCISCO)
Ethics in Tech: Comedy Roast https://www.eventbrite.com/e/big-tech-comedy-roast-by-ethics-in-technology-held-in-san-francisco-tickets-76858685397
DEC. 14 – DEC. 14 (SILICON VALLEY)
Ethics in Tech: Comedy Roast #2 https://www.eventbrite.com/e/big-tech-comedy-roast-by-ethics-in-technology-tickets-76855590139
NOV. 20 – NOV. 20 (BARCELONA)
Report launch with presentations by Mary Wareham, Pere Brunet, and Joaquín Rodríguez Álvarez: http://www.centredelas.org/ca/activitats/conferencies/4138-conferencia-l-arribada-dels-robots-assassins-amb-mary-wareham-i-presentacio-de-l-informe-noves-armes-contra-l-etica-i-les-persones