Creepy robots are all in your head
Scientists have identified how the brain assesses robots – and possibly people – as potential social partners. Mark Bruer reports.
Researchers have mapped out the way our brain decides whether a robot is likeable or looks too human – and therefore repulsive.
The more robots look like humans, the more we like them. But only up to a point.
The effect is reversed once the resemblance becomes too close, when robots begin to strike observers as eerie, unsettling and unlikeable.
This is known as the “Uncanny Valley” phenomenon, first theorised by Japanese robotics professor Masahiro Mori in 1970. “Uncanny” describes the sense of strangeness, and “Valley” the drop in robots’ likeability as they become more humanoid.
The Uncanny Valley effect has often been demonstrated, but scientists from German and British universities have now shed light on how it is created in the brain of the observer.
The team, led by Astrid Rosenthal-von der Pütten of the University of Duisburg-Essen in Germany, identified a network of brain regions that work together to determine whether a robot is considered likeable and a worthy social partner.
They did this by assembling a range of images in five groups: mechanical robots; humanoid robots, whose body shapes resemble humans; android robots, which look even more human and have recognisable facial features; artificially altered humans, only slightly different to us but with flawless features; and real humans.
These images were then shown to a group of 26 volunteers, who were asked to score the pictured individuals on their likeability, familiarity, and human-likeness.
Using functional magnetic resonance imaging, the team measured the participants’ brain activity, including in the prefrontal cortex and amygdala, as they carried out their tasks.
As predicted by the Uncanny Valley phenomenon, participants preferred lifelike robots over mechanical ones, but disliked those that appeared “too human”, including the artificially altered humans.
Scans showed that several parts of the brain were involved in the evaluation of the images, each serving a unique role in the process.
While one brain section appeared to perform the task of simply identifying whether the images were human or not, others assessed the degree to which the images showed human characteristics.
But it was the ventromedial prefrontal cortex (VMPFC), a key part of the brain’s reward system located in the frontal lobe and involved in processing risk and fear, that appeared to be making the final call.
Activity in the VMPFC closely tracked the Uncanny Valley responses of participants, increasing as more human-like robots were shown and then dropping sharply in reaction to the most lifelike versions.
Participants were then asked which of the pictured characters they would trust to select a gift for them – a measure of their potential social value. Here, too, the Uncanny Valley phenomenon guided which robots participants were willing to entrust with the task.
Writing in the journal JNeurosci, Rosenthal-von der Pütten and colleagues say their work sheds new light on the functions of different parts of the medial prefrontal cortex, particularly the VMPFC, and provides insight into how people respond to and assess artificial social partners.
The findings may also apply to the evaluation of human social partners.
“Understanding human responses to artificial agents is important, not only for optimising human-robot interaction, but it may also reveal previously unrecognised mechanisms governing human-human social interactions,” the authors write.