When you look at a robot, do you tend to think of it as having human attributes or as a mechanical algorithm?
Scientists say they may be able to predict your attitude from your brain activity, and that this in turn might help them understand the degree to which people accept artificial intelligence.
“We are living in an era when robots will start becoming more and more present in our environment,” says Agnieszka Wykowska, senior author of a paper published in the journal Science Robotics, “so it’s very important to understand what sort of attitudes people have towards robots”.
Previously, Wykowska’s group found that some people are more likely to ascribe intentionality to robots, while others describe them mechanistically.
They suggested, therefore, that it may be possible to “induce adoption of the intentional stance towards artificial agents”, which are becoming increasingly ingrained in our lives.
“Our constant exposure to digital devices, some of which are seemingly ‘smart’, makes the interaction with technology increasingly more smooth and dynamic, from generation to generation,” they write, noting that humanoid robots could soon enter our homes.
In the new study, they found that differences in brain activity of 52 volunteers correlated with later perception of a robot’s actions.
First, the researchers recorded participants’ brain activity using electroencephalography (EEG), instructing them to relax and let their thoughts wander freely.
Afterwards, participants were asked to choose between descriptions of different visual scenarios involving a humanoid robot called iCub. Some used “intentional/mentalistic” vocabulary, such as “iCub was trying to cheat”; others used “mechanistic” vocabulary, such as “iCub was unbalanced for a moment”.
People who perceived the robot’s behaviour as an unintentional product of its programming showed higher resting beta-wave activity in certain brain regions, activity associated with a greater tendency to make sense of oneself and others.
During the task, participants who ascribed intentions to the robot showed higher gamma activity in other brain regions. These brainwaves are associated with theory of mind, the capacity to understand the thoughts, feelings and emotions of others.
In a related commentary, Tom Ziemke, of Sweden’s Linköping University, cautions that attributing intentionality to robots can be misleading.
“Autonomous technologies, such as social robots and automated vehicles, are in many cases easy to interpret in terms of human-like intentionality and mental states,” he writes, “but there is clearly a risk of overly anthropomorphic attributions.”
While attributing human-like intentions to animals has been well studied, he notes that its role in human-robot exchanges is less clear, and that Wykowska’s study could be an important step towards understanding it.
Ultimately, he notes, this could inform robot designs that better manage people’s expectations.