In an open letter sent to the United Nations this week, robotics experts from around the world argue that we must ban lethal autonomous weapons systems (LAWS).
The letter, which was signed by 116 founders of robotics and AI companies, is a follow-up to a 2015 letter urging the UN to ban “killer robots.”
“As companies building the technologies in AI and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” reads the letter.
Signatories include Elon Musk of Tesla, Esben Ostergaard of Universal Robots, and Mustafa Suleyman, head of Applied AI at Google’s DeepMind.
The letter endorses the UN’s recent decision to establish a Group of Governmental Experts (GGE) to discuss lethal autonomous weapons systems and urges the group to “work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”
The letter’s authors warn that lethal autonomous weapons are poised to become “the third revolution in warfare.” This technology, once developed, will permit armed conflict to be “fought at a scale greater than ever, and at timescales faster than humans can comprehend.”
“We do not have long to act,” warns the letter. “Once this Pandora’s box is opened, it will be hard to close.”
The GGE’s inaugural meeting was supposed to happen this week, but it was rescheduled for November because some states had not yet paid their financial contributions to the UN.
Toby Walsh, a professor at the University of New South Wales and one of the letter’s organizers, worries about what could happen if machines had the ability to make life-and-death decisions. “In the longer term, I am worried we will industrialize war, introducing machines that we cannot defend ourselves against, resulting in an arms race that will destabilize further an already delicate world.”
Establishing an international ban similar to those we already have on biological weapons and blinding lasers may be the only way to limit the role of dangerous killer robots on the battlefield, says Walsh.
The primary obstacle to such a ban would be the difficulty of enforcing it, especially given how useful autonomous systems are in so many other applications. On top of that is the argument that underground forces, like terrorists and drug cartels, will continue to work on this technology even if responsible parties adhere to the ban.
In the 2015 letter mentioned above, Elon Musk and thousands of other experts argued that AI technology “has reached a point where the deployment of such systems is…feasible within years, not decades, and the stakes are high. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
“At present, I haven’t identified any particular aspect which both makes a system transition from a semi-autonomous to a fully autonomous lethal weapons system, and is auditable in a straightforward manner by a third party,” admits Ryan Gariepy, CTO of Clearpath Robotics.
According to Gariepy, one of the most important outcomes of the initial GGE meeting would be a shared understanding that the use of LAWS is simply not an acceptable way to conduct conflict. Such an understanding could pressure governments not to use autonomous systems, even in the absence of a formal ban.