Aerospace engineer Dr Zena Assaad works at ANU across projects that explore safety and trust in autonomous technology, human-machine teaming, and the regulation and assurance of emerging technology capabilities in safety-critical domains. Here she explores for Cosmos the connection between humans, trust and technology.
In August 2023, a fleet of robotaxis was rolled out in San Francisco. While not the first instance of driverless vehicles on public roads, the rollout was still a feat of scale. Hundreds of robotaxis operated around the bustling city, allowing people to hail driverless cars as they would an Uber.
This rollout was supposed to be a leap forward for the autonomous vehicle industry. However, the more than 75 incidents reported that night proved otherwise. These included one vehicle failing to stop for a fire truck, another driving into wet cement on a construction site, and multiple vehicles losing connectivity due to a nearby music festival.
These incidents created a cacophony of negative sentiment around autonomous vehicles, diminishing the already fragile trust people held in these technologies. And the erosion of public trust in technology has far-reaching implications.
What is trust?
Trust is a nebulous concept, at once subjective and relational. It is subjective because trust manifests differently for different people. We all have our own values and benchmarks we use to guide who or what we trust. And it is relational, because it depends on interactions and experiences. How much we do or don’t trust someone depends on reciprocal interactions and experiences between us.
When considered in the context of technology, the concept of trust becomes even more difficult to capture, because it is inherently one sided. The inanimate nature of machines means they do not hold the capacity to trust, leaving this notion entirely for humans to manage.
At its core, trust in machines is a reflection of our confidence in their ability to do what we expect them to do. Will my mapping tool get me to the right location? Will the ATM dispense the correct amount of money? Will the robotaxi get me from point A to point B without crashing into a fire truck or any other vehicle?
Our trust in machines stems partly from our own interactions and experiences with them, and partly from our understanding of their capabilities. What we believe a machine is capable of shapes how we choose to interact with that machine.
Advances in technology are often accompanied by embellished narratives of their capabilities. When these capabilities embody something akin to science fiction, such as a driverless car, our imaginations run rampant with all the possible ramifications.
This fuels an innate sense of distrust in technology, which is heightened when things go wrong with machines. In the case of the robotaxi rollout, the many incidents that accompanied that event increased the public’s distrust in autonomous vehicles. But why is trust so important?
Why do we need to trust machines?
Trust influences our behaviour and underpins our decision making. We trust pilots, so we board planes that fly at high altitude across vast stretches of open ocean. We trust nurses and doctors, so we allow them to administer our medicine or operate on us. We trust engineers, so we spend our days in high-rise buildings working, shopping, eating or exercising. All of our decisions are underpinned by trust, whether we consciously realise it or not.
Without trust, you would be hard pressed to find people who would willingly agree to do most of these things. The caution and scepticism around autonomous vehicles is an example of how trust permeates people’s decision making. Greater levels of trust from the general public would likely have resulted in a sweeping uptake of these technologies, taking them from novelty options to mainstream capabilities. Instead, the multitude of incidents and accidents involving driverless cars has led to a deep sense of apprehension, with many people not wanting to “take the risk” with autonomous vehicles. This was evident in a recent survey carried out by Swinburne University of Technology, which found 53% of Australians were not in favour of self-driving vehicles.
Trust is a transient notion, changing with time and experience. It takes time to build, minimal time to lose and considerable effort to regain once lost. Our varying experiences over time shape how much or how little we trust something.
Overwhelmingly positive experiences can lead to over-trust – trusting someone or something more than we should. Conversely, negative experiences can lead to under-trust – trusting someone or something far less than we should, or in some cases, not trusting at all. Each end of the spectrum comes with its own limitations, particularly in the context of technology adoption.
When we over-trust machines, we assume they are never wrong and never produce errors. In these circumstances, we are less likely to notice and rectify an issue when it arises. When we under-trust technology, we swing to the opposite end of the spectrum and assume machines will always be wrong. The issue here is that we forgo the benefits technology can provide.
Achieving the Goldilocks ‘just right’ amount of trust is the ideal goal. But this level of trust is so tied up in people’s personal perceptions, values and experiences that it becomes slippery to achieve and maintain.
While there are claims autonomous vehicles may statistically lead to fewer road accidents, these claims are not enough of a catalyst for humans to embrace this technology. However, other applications of technology in more critical fields, such as healthcare, do have demonstrated benefits for society. And trust in these technologies is paramount to their uptake.
How does trust impact technology uptake?
Early disease diagnosis is an ongoing challenge globally. Artificial intelligence (AI) technologies have demonstrated a means of significantly improving early disease diagnosis. This is mainly due to their ability to analyse large data sets rapidly, bypassing the lengthy administrative and process delays that come with healthcare systems while simultaneously reducing the likelihood of human error.
Because technology does not operate in a vacuum, successful adoption and effective implementation depend largely on public acceptance. AI disease diagnostic tools require human cooperation; doctors and nurses must use these tools with confidence and consideration. The issue is that the double-edged sword of trust challenges the dynamic between human and machine.
A calculator is just as much a machine as an AI-enabled disease diagnosis tool or an autonomous vehicle. So why do we have no trouble trusting a calculator? Most people have confidence in a calculator’s answers, so why do so many people lack confidence in other technologies? There are two reasons.
The first is that trust is shaped by interactions, experiences and knowledge. Technology is advancing at such a fast pace that we have not yet formed these with newer applications as we have with established technologies. The embellished narratives around AI make this even harder, because our understanding of its capabilities has been distorted by things such as fake news.
The second reason for a lack of trust in advanced technologies stems from a lack of understanding of our role in relation to machines. The increased sophistication of technology has promoted machines from tools to teammates. Machines have evolved beyond static button-pressing tools. The give and take we are now afforded with intelligent machines disturbs the hierarchical balance we once held, positioning our own role as supportive, rather than authoritative. This change can create a sense of unease, encouraging miscalibrated levels of trust in machines.
The common underpinning of these two reasons is that they are human-centred. Trust in technology lies entirely with humans. Therefore, any measures put in place to encourage trust must be human-centric.
How can trust in technology be built?
Calibrated levels of trust can be encouraged for humans using or working with machines. In fact, researchers around the world have proposed myriad measures for encouraging trust between humans and machines.
Some researchers focus on technical solutions, trying to embed trust in the code itself. As trust can be defined as confidence that a system will behave as we expect, researchers have suggested increasing the reliability of systems by integrating guardrails directly into the structure of the code.
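To give a sense of what a guardrail in code can look like, the sketch below is a minimal, hypothetical illustration: the machine acts autonomously only when its confidence in a decision clears a threshold, and otherwise hands control back to a human. The function name and threshold value are assumptions for illustration, not drawn from any real autonomous-vehicle or diagnostic system.

```python
# A minimal, hypothetical "guardrail" pattern: act only when confident,
# otherwise defer to a human. All names and values here are illustrative.

CONFIDENCE_THRESHOLD = 0.9  # assumed calibration point, chosen for the example


def guarded_decision(prediction: str, confidence: float) -> str:
    """Return the system's action, or escalate to a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction        # system is confident: act on its prediction
    return "defer_to_human"      # low confidence: hand control back to a person


print(guarded_decision("proceed", 0.95))  # confident, so acts: "proceed"
print(guarded_decision("proceed", 0.40))  # uncertain: "defer_to_human"
```

The design choice here is the point: rather than trying to make the machine infallible, the guardrail makes its behaviour predictable, which is the kind of reliability that supports calibrated trust.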
Other researchers focus on interventions at the human level, taking on the challenging task of addressing the nuances and subjectivity of trust. These interventions include training and education on how machines actually work, quelling the over-embellished understandings of these technologies. A more balanced and accurate understanding of technology tempers our expectations of it, and increases our confidence in what we know it is and is not capable of.
The technical solutions are more straightforward, presenting themselves in the 1s and 0s of code. The social solutions are more challenging to capture, as they exist outside the binary construct of code. This tension between technical and social considerations is an ongoing challenge. Balancing the two in an era of intelligent machines will require harmony between the people who design these systems and those who use them.
Towards the future
Technology is increasingly being integrated into a plethora of industries. It is now rare not to encounter digitisation of some form in our everyday lives. Whether we have consented to it or not, the digital revolution has cemented itself as the fourth industrial revolution, fundamentally shifting our world and ways of life.
Trusting machines has become a necessity in the digital age and is particularly important for the uptake of technology. However, the precarious seesaw of trust has created barriers to the adoption of emerging technologies. In the case of autonomous vehicles, the multitude of incidents associated with these vehicles has eroded public trust. While the absence of driverless taxis from our society is unlikely to affect our everyday lives, the same cannot be said for all applications of technology. The absence of smartphones, for example, would likely have a significant impact on many lives.
Technology has the potential to reshape our society, in both positive and negative ways. One of the things that determines how technology permeates our society is how we choose to interact with it – how much we embrace it, how much we question it and how much we trust it.