Should the UN Ban Weapons with Artificial Intelligence?

“Concentrated power is not rendered harmless by the good intentions of those who create it.”  – Milton Friedman

The idea of lethal autonomous weapons, also known as “killer robots,” was controversial even before the technology was developed. Now that the technology exists, United Nations officials will meet to discuss what to do about it.

“If we don’t get a ban in place, there will be an arms race,” writes Toby Walsh, one of the thousands of AI and robotics researchers who signed an open letter on the topic this summer. “And the end point of this race will look much like the dystopian future painted by Hollywood movies like The Terminator.”

The letter seeks specifically to ban autonomous weapons that can “select and engage targets without human intervention.” Such AI technology is “feasible within years, not decades, and the stakes are high.”

The main argument for autonomous weapons is that they replace human soldiers, thereby reducing casualties – at least on one side of the fight. The primary arguments against them are that 1) they would lower the threshold for going to war and 2) they would increase the risk to civilians.

“A global arms race is virtually inevitable” unless we establish a ban, reads the letter. “It will only be a matter of time until they appear on the black market and in the hands of terrorists.” What would happen if killer robots fell into the hands of ISIS, for example?

“The world has decided collectively not to weaponize other technologies,” Walsh points out. “We have bans on biological and chemical weapons. Most recently, we have banned several technologies including blinding lasers and anti-personnel mines.”

The letter insists that the development and use of killer robots would tarnish the AI research field, adding that “there are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

Walsh notes that a ban on killer robots would not stop research into similar technologies like autonomous vehicles. 

The NY Times reports that the Pentagon has already “put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power.” Defense officials are currently testing “the kind of weaponry that until now has existed only in Hollywood movies and science fiction,” insisting that such weapons are needed to maintain an edge over Russia, China, and other potential rivals.

The Pentagon insists that autonomous weapons would augment the skills of humans, not replace them, but many fear that AI-powered weapons will go rogue and start to engage targets on their own. “There’s so much fear out there about killer robots and Skynet,” says Deputy Defense Secretary Robert Work. “That’s not the way we envision it at all.” 

This week, the Institute of Electrical and Electronics Engineers (IEEE) announced a project to develop ethical standards for remote-controlled systems. The organization’s initial report “warns that autonomous weapons would destabilize international security, lead to unintended military escalation and even war,” writes Walsh.

The idea for a ban on autonomous weapons seems to have broad support, and nine members of Congress have written letters to Sec. of State John Kerry and Sec. of Defense Ashton Carter expressing their support for a preemptive ban.

“All technology can be used for good or bad,” writes Walsh. “My fingers are crossed that the UN will take the first step on Friday.” 

Editor’s note: The possibility of someone seizing our weapons and turning them against us is real. Statistically, it will almost certainly happen at some point. But will anyone be able to stop their production? I doubt it.
