Published on February 5th, 2013 | by Andrew Porwitzky
Sci-Fi Science: Like Tears in Rain
We live in an absolutely amazing time. For most of the 20th century, creators speculated about the early 21st century and how we would live in it. We may not have Moon colonies, rampant genetic engineering, or robot slaves, but as always, science is advancing around us. Sometimes the advances are scary. Sometimes terrifying. But science doesn't turn back. It can't.
Here I am, your friendly sidekick research scientist, starting a new column which will report on absolutely mind-blowing science. I’m not talking about worm holes, dark matter, or String Theory. I’m talking about things that will actually matter to you. Things you need to know about. Things that are amazing.
Today, let’s talk about robot murder.
In a recent study conducted at the Eindhoven University of Technology in The Netherlands and presented at the International Conference on Human-Robot Interaction in Washington DC, participants played a game with a humanoid robot. The robot, called iCat, was designed to study human-robot interactions. Test subjects cooperatively played Mastermind with the iCat for a few minutes, interacting with it and getting to know it. They were then dispassionately told by the experimenters to turn the dial that slowly powered down the robot. They were informed that doing so would wipe the robot's memory and erase its personality forever.
That’s when the robot began begging for its life.
The purpose of the study was to measure how people's responses to killing the robot differed depending on whether the robot had been "agreeable" or "non-agreeable" while cooperating on the game. Basically, for one participant the robot was polite and nice, and for another it was a dick. Another variable was intelligence: for some people the robot was smart (making good move suggestions) and for others it was stupid (making intentionally bad suggestions). Surprisingly, agreeableness didn't factor much, but intelligence did. Below is the delay in seconds for participants to kill the iCat under the four conditions.
Of course this study has a lot to say about not only human-robot interactions, but humans themselves. The authors of the study sum it up beautifully.
“The more intelligent a being is the more rights we tend to grant to it. While we do not bother much about the rights of bacteria, we do have laws for animals. We even differentiate within various animals. We seem to treat dogs and cats better than ants. The main question in this study is if the same behavior occurs towards robots. Are humans more hesitant to switch off a robot that displays intelligent behavior compared to a robot that does show less intelligent behavior?”
Not only scientists but also lawyers have been considering the full impact of human-robot interactions on society for decades. A 2005 paper, "Toward A Method for Determining the Legal Status of a Conscious Machine," and a 1981 paper, "Frankenstein Unbound: Toward a Legal Definition of Artificial Intelligence," show just how much concern there has been over these issues. If a robot were created that was truly intelligent, as much as any living human, and it was "terminated," what would that mean legally?
How do we judge if something is alive? You love your dog and treat it as a member of your family, but what about the birds on your bird feeder? About a week ago my wife found a bird that had gotten trapped in our garage and died. She was upset, but if it had been a cat she would have been inconsolable. Then again, I know people who have essentially no empathy for animals.
“… four had to be excluded due to irregularities in the experiment procedure. One participant, for example, switched the robot off before he actually received the instructions.”
There will undoubtedly be those who will always view intelligent robots as disposable, regardless of how intelligent they become. This was true for humans with dark skin not so long ago. What rights should a machine have? What rights should all men have?
In 2005, "service robots" outnumbered "industrial robots" for the first time globally. Service robots are things like robotic vacuum cleaners, lawn mowers, and pets. More and more service robots will become an everyday part of our society for all the ways they can enrich our lives, just as computers do. Their intelligence, or simulation of intelligence, will continue to grow. If you have a robotic cat with all the intelligence and behavioral responses of a real cat, you can potentially become as attached to it as you would a real cat. What happens if your neighbor maliciously kills it? Is that simply "destruction of property"?
“This does confirm … that the social rule of Manus Manum Lavet (One hand washes the other) does not only apply to computers, but also to robots. However, our results do not only confirm this rule, but also it suggests that intelligent robots are perceived to be more alive. The Manus Manum Lavet rule can only apply if the switching off is perceived as having negative consequences for the robot. Switching off a robot can only be considered a negative event if the robot is to some degree alive.”
Robots will continue to become more intelligent, and we will see them in day-to-day life in an ever-growing capacity. The place these robots hold in our society will ultimately be dictated by how we respond to them. I don't know if there is a right or wrong way to respond to an intelligent robot. But studies like this one give us a glimpse of the future: although we intellectually know that a machine may be just a machine, something happens emotionally when we interact with intelligent robots that causes us to hesitate when asked to unplug them.
And that "something" has the potential to reform our society, not just for robots, but for all of us. Because if we can empathize with robots, then why not with people?
Reference: original research paper