Need a hand with that?


If we’re to trust AI, it helps if it can tell us what it’s doing.


A robot reaches for a pill bottle during the trials.

Edmonds et al., Sci. Robot. 4, eaay4663 (2019)

By Ian Connellan

A new artificial intelligence (AI) system has been created that can both perform complex tasks and explain its behaviour in multiple ways in real time.

A new study published in the journal Science Robotics suggests this will encourage trust between AI and humans.

Lead author Mark Edmonds, from the University of California Los Angeles, realised that a hallmark of humans as social animals is the ability to provide comprehensive explanations of their behaviour.

Such explanations promote mutual understanding, which fosters trust between individuals and enables collaboration.

However, most AI systems and robots to date have been unable to articulate what they have learned or explain what they are doing, making them harder for humans to trust.

To change this, Edmonds embarked on a study to find the best way to foster trust between humans and AI.

He and his colleagues built a physical robot driven by the AI system, then gave it the task of opening pill bottles and explaining its method.

The robot/AI system not only showed the ability to learn from human demonstrators but also succeeded in opening new, unseen bottles.

The task required the robot to replicate both the hand pose and the force used by a human. A total of 64 human demonstrations of opening three different medicine bottles served as the training data for the robot. The three bottle types had different locking mechanisms: one had no safety lock, one a push-twist lock, and one a pinch-twist lock.

After learning how to open the bottles, the AI was programmed to explain its actions in real time in three ways: symbolic explanations outlining the action sequences; haptic explanations showing the forces the robot applied; and symbolic and haptic explanations combined.
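The distinction between the three modes can be illustrated with a small sketch. This is not the published system's code; the `Step` record, the function names, and the example forces are all hypothetical, standing in for the idea of pairing a symbolic action label with the haptic (force) data recorded at each step.

```python
# Hypothetical sketch of the three explanation modes: symbolic, haptic,
# and both combined. Structures and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Step:
    action: str           # symbolic label for the step, e.g. "push", "twist"
    force_newtons: float  # peak gripper force recorded during the step

def symbolic_explanation(steps):
    """Symbolic mode: report only the sequence of actions taken."""
    return " -> ".join(s.action for s in steps)

def haptic_explanation(steps):
    """Haptic mode: report the force applied at each step."""
    return ", ".join(f"{s.action}: {s.force_newtons:.1f} N" for s in steps)

def combined_explanation(steps):
    """Both modes together, matching the study's third condition."""
    return f"{symbolic_explanation(steps)} | {haptic_explanation(steps)}"

# Example: a hypothetical plan for a push-twist bottle.
plan = [Step("grasp", 2.0), Step("push", 8.5), Step("twist", 4.2)]
print(combined_explanation(plan))
```

In this toy form, the symbolic string carries the "what" of the behaviour while the haptic string carries the "how hard", which is the kind of internal state the study surfaced to viewers in real time.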

These explanations were further compared to summary text-only explanations.

The study found that real-time visualisations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions after the fact.

Interestingly, the study found that forms of explanation best suited to foster trust in humans aren’t necessarily best at passing on the method for performing the task.

The AI used in the study could underpin an intelligent tutoring system and, with further development, could also support critical decision-making in security applications.

Original paper: https://robotics.sciencemag.org/content/4/37/eaay4663