
United Nations tackles the 'fight' against killer robots; teaching them right from wrong may be the first step


Scott Sutherland
Meteorologist/Science Writer

Thursday, May 15, 2014, 12:13 PM

For years, science fiction writers have relied on our fascination with robots to spin tales of the mechanized downfall of humanity. Now, with armed drones a reality and military R&D projects working to produce robots that can think and make decisions for themselves, the world's diplomats are stepping up to address the issue before we get ourselves in over our heads.

Movies like The Terminator, The Matrix and even I, Robot have shown all too well what can happen to us if we carelessly (or some might even say recklessly) develop artificially intelligent robots, especially when we arm them with lethal weapons and give them the power to decide when and where to end a life. In real life, there are already concerns over military drones, which are remote-piloted by human operators who still make the decisions about the use of lethal force. With new robots being designed that could take on those decisions themselves, people around the world are understandably concerned, not only about the decisions but about who might be held accountable, morally or legally, if something were to go wrong; that is, if a robot were to commit an act that would be considered a war crime or a human rights violation.

The Campaign to Stop Killer Robots may seem like something out of one of these movies or other science fiction stories, but it's a very real movement with an equally real message: "Allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war."

The United Nations has already been looking into these concerns, but Tuesday marked the start of a four-day meeting, the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems, which will address the exact definition of a killer robot, current developments in the technology, and the ethics of not only letting these robots fight our wars for us, but of allowing them to make the life-and-death decisions involved in those conflicts. The results of the meeting may even include a ban on these lethal autonomous robots.

Some experts believe that robots could be programmed with a set of ethics and morals, and that they would actually follow the rules of engagement far more consistently than human soldiers would. In a research paper from 2007, Georgia Institute of Technology professor Ronald Arkin wrote: "It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of."
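Arkin's proposal is often described as an "ethical governor": a software layer that vets every proposed use of force against encoded rules of engagement, forbidding action by default. As a rough, hypothetical sketch of that idea (the class, fields and scoring below are invented for illustration, not Arkin's actual system), such a constraint check might look something like this:

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    """A candidate engagement (all fields hypothetical, for illustration)."""
    target_is_combatant: bool       # positively identified as a combatant?
    civilians_at_risk: int          # estimated civilians who could be harmed
    expected_military_value: float  # crude score for the military advantage


def ethical_governor(action: ProposedAction) -> bool:
    """Toy rules-of-engagement filter: lethal force is forbidden by
    default and permitted only if every encoded constraint is satisfied."""
    # Distinction: never engage anyone not identified as a combatant.
    if not action.target_is_combatant:
        return False
    # Proportionality: expected civilian harm must not outweigh the
    # expected military value (a crude numeric proxy here).
    if action.civilians_at_risk > 0 and (
        action.expected_military_value <= action.civilians_at_risk
    ):
        return False
    return True


# A strike that risks three civilians for little military value is vetoed.
print(ethical_governor(ProposedAction(True, civilians_at_risk=3,
                                      expected_military_value=1.0)))  # False
```

The hard part, of course, is not the final if-statement but everything feeding into it: deciding whether a target really is a combatant, or how to score "military value," on a dynamic battlefield. That is exactly where the critics push back.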

However, not all agree. Noel Sharkey, a professor of artificial intelligence and robotics and professor of public engagement at the University of Sheffield, UK, told Defense One: "I do not think that they will end up with a moral or ethical robot. For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won't really care. It will follow a human designer's idea of ethics."

There are already projects in the works to give robots a set of rules for proper behaviour. University of British Columbia PhD student AJung Moon is part of the university's Open RoboEthics Initiative, which has programmed a robot to take humans and their needs into consideration in its decision making.

The rules and situations in the test were fairly simple, nothing compared to what would be needed for a robot to properly decide between the life and death of a person, but it is a start. Check out her interview with Mashable.com to see her thoughts on teaching robots right from wrong.
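The article doesn't detail the rules the UBC team used, but a needs-aware decision of this simple kind can be pictured as a weighted comparison between a human's urgency and the robot's own task. The function and scoring below are a toy sketch invented for illustration, not the Open RoboEthics Initiative's actual model:

```python
def should_yield_to_human(human_urgency: float,
                          robot_task_priority: float) -> bool:
    """Toy decision rule: defer to the human unless the robot's task
    clearly outweighs the human's need. Both scores are assumed to be
    on a common 0-10 scale (an assumption made for this sketch).
    """
    HUMAN_WEIGHT = 1.5  # assumption: human needs count extra by default
    return human_urgency * HUMAN_WEIGHT >= robot_task_priority


# Example: a person in a hurry (urgency 6) vs. a robot on a routine
# delivery (priority 4) -> the robot yields.
print(should_yield_to_human(6, 4))  # True
```

Even this toy version shows why Sharkey's objection bites: the weights and scores are a human designer's idea of ethics, not anything the robot itself understands.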

As for this U.N. meeting, we'll have to wait for the results to know exactly how the delegates plan to proceed, but it's at least good to see that researchers and diplomats are taking the cautionary messages of science fiction writers to heart on this issue.

What are your thoughts? Can robots be properly programmed with ethics and morals? Should they be put into service in armed conflicts? Leave your answers in the comments section below.
