How can we trust AI?

The brutal truth of being human is that we will never truly know the thoughts of anyone but ourselves. The only consciousness we have open access to is our own and all we know about others, including our nearest and dearest, is what we can, speculatively, infer from their words and actions or what they choose to reveal through explanation.

So, in a world increasingly populated by non-human agents such as robots and artificial intelligences, the unknowability of the mind has transformed into a new type of anxiety. While we can deduce that the thought processes and motivations of others must broadly cohere with our own thanks to our shared biology, there is no such mental convergence with machines.

How can we trust the AI if we don’t know what it’s thinking?

This anxiety has been a source of inspiration for a number of science fiction (SF) writers over the years. SF is a literature of change for technology-saturated societies and often helps us explore the on-going process of mutual transformation: as we develop and refine technology, it begins to redefine our modes of living and thinking, which in turn engenders new technological desires.

As the American literary critic Fredric Jameson has argued, the function of SF is “not to give us ‘images’ of the future… but rather to defamiliarise and restructure our experience of our own present.”

Right from the early days of SF, the motivations and reasons behind the actions of robots have been a source of fascination. Isaac Asimov explored explanations of machine behaviour in many of his short stories, often somewhat pessimistically. An important focus of such stories was the idea that robots themselves might be able to give humans an explanation of their thinking and behaviour. Such explanations are vital in fostering trust.

An example of the relationship between machine explanations and human trust is given by Robin R Murphy, from Texas A&M University, in a recent article in the journal Science Robotics.

She points to a 1972 story called “Long Shot” by the American science fiction writer Vernor Vinge. In the story a robotic spaceship seemingly goes rogue by deviating from the established flight plan, only to explain later that, faced with a time-sensitive situation, it was unable to consult its human supervisors first. In this scenario distrust is dispelled by the robot being able to explain its actions.

As is so often the case, the link between science fiction and science is deep. Also in Science Robotics, a team of international AI researchers led by David Gunning of the US Defense Advanced Research Projects Agency (DARPA) offer a clear account of what is known in current AI research as explainable artificial intelligence, or XAI.

The authors write that an “XAI system should be able to explain its capabilities and understandings; explain what it has done, what it is doing now, and what will happen next; and disclose the salient information that it is acting on.” 
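
To make that description a little more concrete, here is a minimal, hypothetical sketch of the idea: an agent that records each action along with the evidence it acted on, so it can report what it has done and what it expects to do next. The class and field names are invented for illustration; this is not Gunning and colleagues’ system.

```python
# A minimal, hypothetical sketch of the XAI idea: an agent that keeps a
# record of each step and the evidence it acted on, so it can answer
# "what have you done, and what comes next?".
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str    # what the agent did
    evidence: str  # the salient information it acted on

@dataclass
class ExplainableAgent:
    plan: list[str]                          # remaining actions, in order
    history: list[Step] = field(default_factory=list)

    def act(self) -> str:
        action = self.plan.pop(0)
        # In a real system the evidence would come from sensors or a model;
        # here it is just a placeholder string.
        self.history.append(Step(action, evidence=f"precondition for '{action}' met"))
        return action

    def explain(self) -> str:
        done = ", ".join(s.action for s in self.history) or "nothing yet"
        nxt = self.plan[0] if self.plan else "nothing, plan complete"
        return f"Done: {done}. Next: {nxt}."

agent = ExplainableAgent(plan=["grasp lid", "push down", "twist", "lift lid"])
agent.act()
print(agent.explain())  # Done: grasp lid. Next: push down.
```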

This is particularly important now, because much of an AI’s behaviour is shaped by the way it learns, and machine learning (ML) is often a difficult and opaque process. The purpose of XAI, the authors write, is that in a world full of AIs “explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.”

Interestingly, Gunning and colleagues point out something of a problem: the machine learning methods that perform best are often the least explainable, while the elements of machine decision-making that are easiest to explain are often the least accurate.

This brings us to a major piece of XAI research in the same issue of Science Robotics, the study on which both Murphy and Gunning’s team are commenting.

US researchers led by Mark Edmonds, Feng Gao, Hangxin Liu, and Xu Xie of the University of California, Los Angeles, report the findings of an experimental investigation into how XAI can enhance human trust in robots and AIs.

XAI is in its infancy, as most researchers have been more focussed on AI task performance than on eliciting an explanation of that performance. Progress is also hampered by the varying explainability of different ML strategies, of which two major examples are symbolic-level task analysis, such as decision trees, and ML based on deep neural networks (DNNs).

What makes life difficult is that while the first is easy to explain to a human, exactly how the system arrived at that symbolic-level knowledge is hard to pin down. The other problem is that DNNs are excellent at accurate task performance (at least in some areas), but it is nigh on impossible to explain how they do it in a way mere mortals can understand.
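
The contrast is easy to see in miniature. A symbolic rule can be printed and read, while a trained neural network is, at bottom, just arrays of numbers. The toy example below is purely illustrative and has nothing to do with the UCLA system itself.

```python
# Purely illustrative contrast between the two strategies.
# Symbolic side: a hand-written decision rule that is trivially readable.
def symbolic_policy(lid_resists: bool, lid_clicks: bool) -> str:
    if lid_resists and lid_clicks:
        return "push down, then twist"      # child-safe cap
    if lid_resists:
        return "squeeze sides, then twist"
    return "twist"

# DNN side: even a tiny "network" is just numbers; the mapping from
# weights to behaviour carries no human-readable story.
import random
weights = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]

print(symbolic_policy(lid_resists=True, lid_clicks=True))  # a readable answer
print(weights)  # accurate models can hide behind matrices like this one
```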

Edmonds and colleagues decided to integrate these two strategies – symbolic planning and a touch-based (haptic) system based on DNNs – using a computational system known as a generalised Earley parser, or GEP. Using this method, they trained a robotic system to open medicine bottles, a tricky operation given the variety of safety mechanisms designed to keep children from getting the lids off.
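
The paper’s generalised Earley parser is far more sophisticated, but the underlying idea can be sketched roughly: the symbolic side says which actions are legal next, the haptic network scores how likely each action looks given the forces it senses, and the system chooses the best action the two agree on. The grammar, actions and scores below are invented for illustration and are not the paper’s actual model.

```python
# A much-simplified illustration of combining a symbolic action grammar
# with a haptic network's scores. (The real GEP parses whole action
# sequences; everything below is invented for illustration.)

# Symbolic side: which actions may legally follow the current one.
grammar = {
    "approach": {"grasp lid"},
    "grasp lid": {"push down", "twist"},
    "push down": {"twist"},
    "twist": {"lift lid", "twist"},
}

# Haptic-DNN side: a (made-up) probability for each action, given the
# forces currently sensed by the gripper.
haptic_scores = {"push down": 0.55, "twist": 0.30, "lift lid": 0.15}

def next_action(current: str) -> str:
    valid = grammar[current]
    # Choose the most likely action among those the grammar allows.
    return max(valid, key=lambda a: haptic_scores.get(a, 0.0))

print(next_action("grasp lid"))  # "push down": likely AND grammatically valid
```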

The machine learned on the basis of data gathered by humans wearing a “tactile glove with force sensors to capture both the poses and the forces involved in human demonstrations in opening medicine bottles.” The task was further complicated by asking the robot to open bottles with a variety of designs, some of which had not been demonstrated in the learning phase.
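
As a rough picture of what one such demonstration might look like as data, here is a hypothetical record format. The field names and numbers are invented; the actual glove captures much richer pose and force streams than this.

```python
# Hypothetical shape of a demonstration record (field names invented);
# the real glove streams far richer pose and per-sensor force data.
from dataclasses import dataclass

@dataclass
class DemoFrame:
    timestamp: float            # seconds from start of the demonstration
    hand_pose: tuple            # e.g. wrist position and orientation
    finger_forces: list[float]  # one reading per force sensor

demo = [
    DemoFrame(0.00, (0.10, 0.02, 0.15, 0.0), [0.0, 0.0, 0.0]),
    DemoFrame(0.05, (0.10, 0.02, 0.12, 0.1), [1.2, 0.9, 1.1]),  # pressing down
]
print(len(demo), "frames captured")
```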

One of their initial findings was that, in terms of accurate task performance, this integrated method is far superior to either the symbolic or the DNN strategy on its own. As the researchers write, “our results confirm that by combining these modalities together, the robot achieves the highest task performance”.

But what about explanations?

The second part of their experiment was psychological, designed to test which ML strategy rendered explanations of the robot’s behaviour that led to the highest level of human trust in the system.

But what exactly is meant by trust? The team used two measures. One asked participants: “To what extent do you trust/believe this robot has the ability to open a medicine bottle, on a scale between 0 and 100?” The other, based on the idea that “the greater the human’s belief in the machine’s competence and performance, the greater the human trust in machines”, was measured by asking participants to predict the robot’s actions as it attempted to open bottle designs it had not previously encountered.
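
In rough terms, the two measures boil down to a rating and a prediction-accuracy score, something like the toy calculation below. All of the numbers are invented; they are not the study’s data.

```python
# Toy illustration of the two trust measures (all numbers invented).
# Measure 1: a direct trust rating from 0 to 100.
ratings = [82, 75, 90, 68, 88]
mean_rating = sum(ratings) / len(ratings)

# Measure 2: how often participants correctly predict the robot's next
# action on an unfamiliar bottle - a proxy for belief in its competence.
predicted = ["twist", "push down", "twist", "lift lid"]
actual    = ["twist", "push down", "squeeze", "lift lid"]
accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)

print(f"mean trust rating: {mean_rating:.1f}/100")
print(f"prediction accuracy: {accuracy:.0%}")
```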

Participants were divided into five groups, each given a different kind of explanation: “the baseline no-explanation group, symbolic explanation group, haptic explanation group, GEP explanation group, and text explanation group.” On the above measures of trust, the symbolic and GEP groups showed greater trust in the robot than the baseline, haptic and text groups, indicating that “In general, humans appear to need real-time, symbolic explanations of the robot’s internal decisions for performed action sequences to establish trust in machines performing multistep complex tasks.”

This once again demonstrates the gap between task performance and explainability: the contribution of the haptic DNN system to the machine’s learning is vitally important but proves to be the least satisfactory basis for explanation and garnering human trust.

The team conclude that this gap “is possible because there is no requirement that components responsible for generating better explanations are the same components contributing to task performance; they are optimizing different goals. This divergence also implies that the robotics community should adopt model components that gain human trust while also integrating these components with high-performance components to maximize both human trust and successful execution.”

As machine agents play an increasingly large role in society, the need for increased trust becomes ever more important. This research indicates that robots and AIs designed with “explainable models offer an important step toward integrating robots into daily life and work.”
