Asimov concocted the Three Laws of Robotics in 1942 and built many of his stories around these rules. While Asimov benefited greatly from those rules as an artistic concept, giving him ideas to explore, in the future our lives may really depend on robot ethics. Fortunately, a computer science professor at Georgia Tech named Ronald Arkin is already working on programming ethics into robots, specifically those for military use. Arkin has started working on what he calls an “ethics governor”, a software package installed in military robots that would theoretically tell the machines when and what – maybe even whom – to shoot.
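Arkin's actual governor is far more sophisticated, but the core idea of a veto layer sitting between target selection and weapons release can be sketched in a few lines. The rules, class names, and thresholds below are my own toy illustration, not Arkin's design:

```python
from dataclasses import dataclass

# Hypothetical sketch of an "ethics governor" as a fail-safe veto layer.
# The constraints below (discrimination, protected sites, proportionality)
# are illustrative stand-ins, not the real system's rule set.

@dataclass
class Target:
    is_combatant: bool
    in_protected_zone: bool   # e.g. hospital, school
    expected_collateral: int  # estimated civilian casualties

def governor_permits_fire(target: Target, max_collateral: int = 0) -> bool:
    """Permit engagement only if every constraint holds; any single
    violation vetoes the shot (the default answer is 'no')."""
    if not target.is_combatant:
        return False          # discrimination: never target non-combatants
    if target.in_protected_zone:
        return False          # respect protected sites
    if target.expected_collateral > max_collateral:
        return False          # proportionality check
    return True

# A lawful combatant in the open passes; everything else is vetoed.
print(governor_permits_fire(Target(True, False, 0)))   # True
print(governor_permits_fire(Target(False, False, 0)))  # False
```

The key design choice is the fail-safe default: the weapon fires only when every check passes, so a missing or uncertain input blocks the engagement rather than allowing it.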
Arkin argues that “not only can robots be programmed to behave more ethically on the battlefield, they may actually be able to respond better than human soldiers.” Now I know that if we do end up having autonomous armed robots, their decision-making must be as good as ours, if not better. Surely coming up with such software is no mean feat, but I’m more worried about the possibility of evildoers coming up with an evil program, which is much easier to write (i.e. a program that instructs robots to kill everything they see).