There is a lot of talk these days about the future of humanity and its interaction with artificial organisms, be they swarms of nanoparticles (Prey by Michael Crichton still gives me the creeps) or more complex robots able to perform a sort of “conscious” behaviour. And we all remember Asimov’s Laws of Robotics, including the one that forbids a robot to injure a human being. But what happens if, in order to protect one human, a robot must harm another?
The simple experiment was to program a robot to prevent other robots – here proxies for human beings – from falling into a hole, and to observe its behaviour. The results were surprising. When only one proxy was in danger of falling, the robot went to the rescue. But when more than one proxy was present in the scenario, things rapidly became more confusing, because the robot was forced to make choices – sometimes successfully, sometimes less so.
According to the team, in 14 out of 33 trials the robot spent so much time dithering over the right decision that both endangered proxies fell into the hole. This raises the fundamental question of whether a robot can make ethical choices for itself. In the words of the experiment’s lead scientist, roboticist Alan Winfield, “my answer is: I have no idea.”
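To see how such indecision can arise even from a very simple control rule, here is a toy sketch – emphatically not Winfield’s actual control code, and all names, numbers, and the “dither on a tie” rule are illustrative assumptions. A robot on a line moves toward the nearest endangered proxy; when two proxies are exactly equally close, it freezes with indecision while both drift toward the holes:

```python
def simulate(proxy_positions, hole=10.0, speed=1.5, steps=40):
    """Toy model (hypothetical parameters): proxies drift outward toward
    holes at x = +/-hole at speed 1; the robot starts at 0, moves at
    `speed` toward the nearest endangered proxy, and saves it on
    contact (distance <= 1). Returns (saved_count, fallen_count)."""
    robot = 0.0
    proxies = [{"pos": p, "saved": False, "fallen": False}
               for p in proxy_positions]
    for _ in range(steps):
        active = [p for p in proxies if not (p["saved"] or p["fallen"])]
        if not active:
            break
        dists = sorted(abs(p["pos"] - robot) for p in active)
        if len(dists) > 1 and abs(dists[0] - dists[1]) < 1e-9:
            pass  # two equally urgent proxies: the robot dithers, staying put
        else:
            target = min(active, key=lambda p: abs(p["pos"] - robot))
            step = min(speed, abs(target["pos"] - robot))
            robot += step if target["pos"] > robot else -step
        for p in active:
            if abs(p["pos"] - robot) <= 1.0:
                p["saved"] = True  # intercepted before the edge
            else:
                p["pos"] += 1.0 if p["pos"] >= 0 else -1.0  # drift outward
                if abs(p["pos"]) >= hole:
                    p["fallen"] = True
    return (sum(p["saved"] for p in proxies),
            sum(p["fallen"] for p in proxies))

print(simulate([3.0]))        # one proxy: rescued
print(simulate([-3.0, 3.0]))  # symmetric dilemma: the robot dithers, both fall
```

With a single proxy the greedy rule works; with two symmetric proxies the tie is never broken, so the robot saves neither – a cartoon of the dithering behaviour the team reported.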
“When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces. If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.” (Wendell Wallach, author of Moral Machines: Teaching robots right from wrong, to New Scientist, 14 September 2014)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(Isaac Asimov, “Runaround”, 1942)
However, these kinds of questions will soon need an answer, and perhaps a whole different protocol. An example comes from Professor Tom Sorell of the University of Warwick, UK, who has helped develop a new set of rules for 21st-century robots – machines now widely used in caring for the elderly, as well as in military and industrial applications. The six values (Autonomy; Independence; Enablement; Safety; Privacy; Social Connectedness) are intended to complement Asimov’s principles in a more positive way, and they are to be embodied in the programming and hardware of the care robot.