What is the uncanny valley? Suppose you win a sweepstakes and the prize is that you get to spend a day hanging out with a robot. You get to pick one of the following companions:
[Image: An industrial robot.]

[Image: Aww!]

[Image: OH GOD WE HAVE TO TAKE OFF AND NUKE IT FROM ORBIT]

[Image: A mannequin that’s kind of creepy.]

[Image: Are you sure that’s not just a photo of a woman?]
The different reactions you just had to the different faces demonstrate the uncanny valley. We’re fine with completely human or completely nonhuman faces, but many people find faces that are somewhere in between creepy.
Though there’s some evidence that the uncanny valley is a real psychological phenomenon, most of the discussion so far has been speculative. In a recent paper published in the journal Cognition, a pair of scientists set out to demonstrate the uncanny valley experimentally.
Theoretically, this is how we think people will react to faces depending on how human or nonhuman they look:
The researchers collected eighty snapshots of real-life robots that people have actually built. (The five photos above are from their collection.) They asked research participants to rate each face for “mechanicalness” and “humanness” on a scale of 0 to 100. That way they could quantify how far to the left or right each robot belonged on the uncanny valley chart.
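The placement step works by averaging each face’s ratings across participants. Here’s a minimal sketch of the idea; the face names and rating numbers below are made up for illustration, not the paper’s actual data.

```python
# Hypothetical ratings: each face gets 0-100 "humanness" scores from
# several participants. Averaging them places the face along the
# uncanny valley chart's horizontal axis.
ratings = {
    "industrial_robot": [5, 10, 2, 8],
    "cute_robot": [35, 40, 30, 45],
    "android": [80, 85, 90, 75],
}

def mean_humanness(scores):
    """Average a face's 0-100 humanness ratings."""
    return sum(scores) / len(scores)

# Map each face to its position on the chart, left (mechanical) to
# right (human).
placement = {face: mean_humanness(s) for face, s in ratings.items()}
print(placement)
# {'industrial_robot': 6.25, 'cute_robot': 37.5, 'android': 82.5}
```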
Then they recruited a fresh set of participants who wouldn’t be biased by having seen the faces before and asked the new set of people to rate each face on “friendliness.” This is what the researchers got:
This figure looks a lot like the uncanny valley chart we expected: there’s a peak in likability at the cute robots that look a little human, and another peak at the robots that look like regular humans. Note, though, that the 100% mechanical industrial robots didn’t do too shabbily either.
Whether people like or dislike a robot’s face is one thing, but does it affect what people actually do? That’s a question engineers care about: if they’re building a robot meant to socialize with people, they’d rather not have their human clients secretly trying to kill it. The researchers recruited another batch of participants to address this question.
They gave each participant some imaginary money. Participants got to decide how much money to give to one of the robots in the pictures. Here’s the important part: the researchers told the participants that the robot would decide how much money to give back. Participants who did especially well at maximizing their imaginary money in the game would win real money as a prize. So the participants had to make decisions about how much they trusted each robot with real money at stake.
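The game described above can be sketched in a few lines. The endowment amount and the robot’s return behavior here are hypothetical placeholders; the paper’s actual parameters may differ.

```python
ENDOWMENT = 100  # imaginary money each participant starts with (assumed)

def play_round(invested, return_fraction):
    """Participant invests part of the endowment in a robot; the robot
    decides how much to give back (modeled here as a fixed fraction of
    the investment). Returns the participant's final total."""
    assert 0 <= invested <= ENDOWMENT
    returned = invested * return_fraction
    return ENDOWMENT - invested + returned

# A robot the participant trusts to be generous rewards big investments...
print(play_round(invested=80, return_fraction=1.5))   # 100 - 80 + 120 = 140
# ...while a robot that keeps most of the money punishes them.
print(play_round(invested=80, return_fraction=0.25))  # 100 - 80 + 20 = 40
```

The key point is that the participant’s payoff depends on how much they trust each robot, so their investment is a behavioral measure of trust rather than just a likability rating.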
These are the results:
The researchers found an uncanny-valley-like pattern again.
Based on this research, we have evidence that the uncanny valley really exists and that people make consequential decisions based on it. So if you’re a designer and you want to avoid “get me a cross and some holy water” moments, keep your robot out of the valley.
The article is open access, so you can read the whole thing here: http://www.sciencedirect.com/science/article/pii/S0010027715300640