How self-driving cars could make moral decisions

Human responses to ethical dilemmas can be modelled into algorithms to guide machines, according to new research. Stephen Fleischfresser considers the implications.

Who’s in the driver’s seat? Autonomous machines like self-driving cars may be able to deal with dilemmas the same way humans do.
Jon Berkeley / Getty

In an age of relativism it is accepted wisdom that human morality is too complex to be accurately modelled. Or so we thought.

Researchers from the Institute of Cognitive Science at the University of Osnabrück in Germany have demonstrated that, at least in some circumstances, algorithms that imitate our moral behaviour can be formulated.

Self-driving cars, which some predict will dominate our roads within a few decades, will inevitably have to make life-and-death decisions: which way to swerve in a traffic accident, for instance, or whose life to prioritise when things go wrong. Do we want them to follow hard and fast rules, or make fuzzier and more human decisions?

A new paper published in Frontiers in Behavioral Neuroscience uses immersive virtual reality simulations of road traffic scenarios to explore how humans make such moral decisions.

In the virtual reality scenario, participants drove through a typical suburban environment and faced a dilemma: the car they were driving was going to hit one of two obstacles – a choice between a variety of humans, animals and inanimate objects – and the participants had to decide which.

The authors, Leon Sütfeld, Richard Gast, Peter König and Gordon Pipa, report being able to make sense of the behaviour observed in the laboratory.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal or inanimate object,” Sütfeld explains. This model could be formulated into an algorithm to enable autonomous vehicles to mimic the kinds of judgements we make in such situations.

Mimic is the important word here. As Sütfeld argues, categorical moral rules “can stop making sense at some point”.

For example, imagine a rule whereby a car is programmed to value human life above all else. Now imagine that car given a choice between the certainty of killing a dog and a 5% risk of mild injury to a human. “In that case,” Sütfeld says, “it would appear rather cruel to run over the dog, but that’s what the categorical rule would tell us to do.”

A value-of-life model can take probabilities like this into account. The authors argue that our machines should behave the way that humans are observed to behave in the same context, rather than using algorithms based on such categorical rules. This, says Sütfeld, “can do a good enough job as to be applicable for self-driving cars”.
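The idea can be sketched in a few lines of code. This is not the authors' actual implementation – the value weights, probabilities and function names below are purely illustrative assumptions – but it shows how an expected-loss comparison over value-of-life weights reaches a different answer than a categorical "humans above all" rule in Sütfeld's dog example.

```python
# A minimal sketch of a value-of-life model (illustrative only, not the
# paper's implementation). Each potential obstacle carries a value weight;
# the car picks the option that minimises the expected loss of value.

VALUE_OF_LIFE = {  # hypothetical value weights, chosen for illustration
    "human": 100.0,
    "dog": 10.0,
    "object": 1.0,
}

def expected_loss(option):
    """Sum of value * probability of harm * severity for an option's path.

    Each option is a list of (kind, p_harm, severity) tuples, where
    severity scales from 0 (unharmed) to 1 (killed/destroyed).
    """
    return sum(VALUE_OF_LIFE[kind] * p_harm * severity
               for kind, p_harm, severity in option)

def choose(options):
    """Return the swerve option with the lowest expected loss of value."""
    return min(options, key=expected_loss)

# Sütfeld's example: certainly killing a dog versus a 5% risk of a mild
# injury (here modelled as 10% of full harm) to a human.
hit_dog = [("dog", 1.0, 1.0)]        # certain, fatal: expected loss 10.0
risk_human = [("human", 0.05, 0.1)]  # 5% chance of mild injury: ~0.5

print(choose([hit_dog, risk_human]) is risk_human)  # True: spare the dog
```

A categorical rule would always swerve toward the dog regardless of how small the risk to the human became; the expected-loss comparison lets the probabilities matter, which is the behaviour the participants in the study actually exhibited.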

But what about the future of moral machines? “Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” König says. “Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans.”

These questions take on extra importance when we consider that the implications of the research extend well beyond self-driving cars. Will such moral decision-making algorithms end up in armed autonomous systems for policing and military purposes? Only time will tell. For now, let’s hope they can make our roads safer and more humane.

Stephen Fleischfresser is a lecturer at the University of Melbourne's Trinity College and holds a PhD in the History and Philosophy of Science.