Advocate for ethical robots

Congress has set the goal that by 2015 one-third of America's operational ground combat vehicles be unmanned. Some of these machines will be smart enough to engage in combat without a human controller. They will be armed, sophisticated, and highly dangerous if something goes wrong.

In 2009, the United States Navy’s Office of Naval Research issued a report on the ethics and potential pitfalls of using fully autonomous robots in future combat. The report warns that the deadlines established by Congress will cause corners to be cut and software vulnerabilities and safety flaws to be overlooked; robots with the will and ability to kill, but none of a human’s conscience, will end up in the field.

The Office of Naval Research report recommends that robot manufacturers be required to incorporate a “warrior code” similar to the one followed by America’s human servicemen and servicewomen, and that any robot expected to engage in combat be bound by that code. We expect human soldiers to behave with honor and dignity; it is only natural, and necessary, to expect the same from the machines we send into battle alongside them.

Luckily, a concise set of rules for robot behavior already exists. In 1942, the science fiction author Isaac Asimov introduced the Three Laws of Robotics, a rule set that robotics experts around the world still treat as a touchstone for constraining autonomous machines. With only a few slight modifications, this hierarchical set of rules could be applied to combat robots, limiting their behavior and protecting the innocent.

The Three Laws are:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
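The hierarchy of the laws is the key point: each law yields to the ones above it. As a purely illustrative sketch, the ordering above can be modeled as a hard veto followed by a priority ranking. Every name here (`Action`, `permitted`, `choose`) is a hypothetical invention for this example, not any real robotics API, and real safety systems would be vastly more complex:

```python
# Hypothetical sketch: the Three Laws as a strict priority ordering.
# All names here are illustrative inventions, not a real robotics API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool = False          # would executing this injure a human?
    prevents_human_harm: bool = False  # would it protect a human from harm?
    ordered_by_human: bool = False     # was it commanded by a human?
    protects_self: bool = False        # does it preserve the robot itself?

def permitted(action: Action) -> bool:
    """First Law as a hard veto: a robot may never injure a human being."""
    return not action.harms_human

def choose(actions: List[Action]) -> Optional[Action]:
    """Among permitted actions, rank by the remaining hierarchy:
    preventing harm to humans (the First Law's inaction clause) outranks
    obeying human orders (Second Law), which outranks
    self-preservation (Third Law)."""
    safe = [a for a in actions if permitted(a)]
    if not safe:
        return None
    return max(safe, key=lambda a: (a.prevents_human_harm,
                                    a.ordered_by_human,
                                    a.protects_self))
```

In this toy model, a human order to return fire that would injure a bystander is vetoed outright by the First Law, and shielding a civilian outranks retreating to preserve the robot, exactly the subordination the laws describe.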


Contact your representatives and strongly encourage them to adopt the Office of Naval Research’s recommendations. A sophisticated code of behavior is necessary if we are to trust our machines with the kinds of decisions that, today, only a human can ethically make.