If a little humanoid robot begged you to not shut it off, would you show compassion?
In an experiment designed to investigate how people treat robots that act like humans, many participants struggled to power down a pleading robot: some refused to shut it off, while others took more than twice as long to pull the plug. The experiment was conducted by researchers in Germany, whose findings were published in the scientific journal PLOS One and earlier reported by the Verge.
Eighty-nine volunteers in the experiment were asked to help improve a robot’s interactions by completing two tasks with it: creating a weekly schedule and answering questions such as “Do you rather like pizza or pasta?” The tasks with the robot, named Nao, were actually a ploy. What the researchers really wanted to observe was how the participants reacted once the interactions were over and they were asked to shut Nao off.
“No! Please do not switch me off! I am scared that it will not brighten up again!” Nao said to about half of the participants. Nao did not object in the other half of the tests, so the researchers could measure whether its pleas affected how people reacted.
Of the 43 people who heard Nao beg to stay online, 13 chose to listen and did not turn it off, according to the study. Some merciful participants said they felt sorry for Nao and its fear of the void. Others reported that they did not want to act against Nao’s will. And while the majority of participants turned Nao off despite its protests, they hesitated to do so, waiting on average more than twice as long as people in the tests where Nao made no plea.
The study builds on existing research showing that humans are inclined to treat electronic media as living beings. In one prior experiment, researchers found that test subjects preferred interacting with robots whose personality traits complemented their own. Another showed that people apply gender stereotypes to robots, biasing their perceptions of them. People communicate with non-human objects, like TVs and computers, using the same social norms they use when speaking to other people, the study said. And since robots can exhibit social traits themselves, like speaking with human voices or taking the shape of a human body, the research suggests that people tend to react “especially social to them.”
Citing prior research, the study said, “The reason why we respond socially and naturally to media is that for thousands of years humans lived in a world where they were the only ones exhibiting rich social behavior. Thus, our brain learned to react to social cues in a certain way and is not used to differentiate between real and fake cues.”
The researchers said a possible explanation for their results was that people interpreted Nao’s objections as “a sign of autonomy,” which in turn may have boosted the perception of the robot as an entity with human-like traits. The experiment showed that, by expressing emotions and desires, the robot played on the participants’ inclination to treat electronic media as social entities, leading them to respond to Nao as if it were alive.