The neurology of the uncanny valley

The neurological reason most people feel uneasy, even repelled, when interacting with a robot that looks almost, but not quite, human has been revealed.

Feeling a sense of discomfort, even mild revulsion, when meeting a robot designed to look realistically human is a common phenomenon. Researchers call the space that falls between complete visual artificiality and total verisimilitude “the uncanny valley”.

It was first identified in 1970 by Masahiro Mori, a robotics researcher at the Tokyo Institute of Technology. He proposed that responses to a humanlike robot would shift from empathy to revulsion as the machine failed to meet innate expectations of how a real human looks and behaves.

Since then, robot makers and psychologists have engaged in a long-running debate over whether designing humanlike machines is a good idea at all. If humanesque androids provoke negative feelings in the people who have to interact with them, runs one line of argument, why not make them all look like, well, anything but humans?

This line of reasoning was bolstered in 2016, when statistician Maya Mathur of Stanford University and facial surgeon David Reichling of the University of California, both in the US, asked volunteers to rate 80 selected robot faces.

The researchers found that the uncanny valley was in full effect, not only in purely aesthetic terms, but also in psychological ones. Robot faces that triggered an uncanny reaction were also more likely to be categorised as untrustworthy.

“These findings suggest that while classic elements of human social psychology govern human–robot social interaction, robust UV effects pose a formidable android-specific problem,” they concluded.

Now, however, the neurological mechanism that underpins the problem appears to have been uncovered. Remarkably, it seems to be related to Parkinson’s disease.

Researchers at Osaka University in Japan, led by Takashi Ikeda, made the finding after exploiting a unique opportunity.

Roboticists at the university had earlier created a lifelike android called Geminoid F, its external appearance modelled on a real person who also lives in Osaka.

Ikeda’s team made short movies of both the robot and the human, filming them as they moved around and made facial expressions. As expected, the real person walked and emoted in ways that came across as wholly human, while Geminoid F appeared a bit try-hard and wonky.

The scientists then played the movies to volunteers, whose brain functions were monitored using functional magnetic resonance imaging (fMRI).

When the volunteers looked at Geminoid F, a region of the brain called the subthalamic nucleus (STN) lit up like a Christmas tree. The STN is a small, lens-shaped collection of neurons that forms part of the brain’s basal ganglia system and plays a role in motor control.

The area is known to malfunction in people with Parkinson’s disease. Since as early as 1965 it has been a target for surgical interventions for the condition, notably a procedure called deep brain stimulation, which in some cases can ease symptoms of Parkinsonism such as rigidity.

The uncanny valley looms, it seems, because humanlike robots appear to be suffering from all-too-human movement disorders.

“Our data attest to commonalities between the movements of the android and Parkinson’s disease patients,” says co-author Masayuki Hirata.

“The android’s movements were rigid and akinesic in a comparable way to the movements of a patient with mild Parkinson’s disease.”

The research is published in the journal Scientific Reports.
