When an autonomous vehicle encounters a traffic hazard – and there’s no driver ready to take the wheel – the car’s actions are governed by its underlying algorithm.
The ethical dilemma ‘Who should the autonomous vehicle kill?’ is often posed as a choice between two evils. An often-used example asks what the AV should do if a person steps in front of the vehicle and swerving to save them would kill another road user.
But what if there was an alternative solution?
Traffic incidents are rarely clear-cut. The software which drives an AV must be ready to respond to a wide range of real-world situations and make quick decisions in the event of an impending accident.
Researchers at the Technical University of Munich have developed the first ‘ethical algorithm’ designed to weigh up the risks of different actions to different road users and make ethical decisions, affording greater protection to vulnerable road users like pedestrians and cyclists.
The study is published in Nature Machine Intelligence.
Maximilian Geisslinger, lead author and TUM scientist, says: “Until now, autonomous vehicles were always faced with an either/or choice when encountering an ethical decision. But street traffic can’t necessarily be divided into clear-cut, black-and-white situations.”
“Our algorithm weighs various risks and makes an ethical choice from among thousands of possible behaviours – and does so in a matter of only a fraction of a second.”
The approach is important because the ethics and safety of AVs are considered a significant milestone to be met before the technology can be rolled out at scale.
The TUM approach goes beyond compliance with road laws. It seeks to modify the behaviour of the AV to minimise overall harm and protect vulnerable road users.
The algorithm’s design adopts the recommendations and parameters outlined in a European Commission expert panel report.
The algorithm incorporates a combination of five ethical principles: minimising overall risk, prioritising the worst-off, equal treatment of people, responsibility and maximum acceptable risk.
To translate these rules into mathematical calculations, the research team classified vehicles and people moving in street traffic based on the risk they pose to others and on their respective willingness to take risks. A truck, for example, can cause serious damage to other traffic participants, while in many scenarios the truck itself will suffer only minor damage. The opposite is true for a bicycle.
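The idea of weighing harm asymmetrically can be sketched in a few lines. This is an illustrative toy model, not the TUM team’s code: the weight tables and the `expected_harm` function are assumptions invented for the example, standing in for the paper’s more detailed risk estimates.

```python
# Illustrative harm-potential and vulnerability weights (assumed values,
# not from the study): a truck threatens others far more than a bicycle,
# while a pedestrian is far more vulnerable than a truck occupant.
HARM_POTENTIAL = {"truck": 1.0, "car": 0.6, "bicycle": 0.1, "pedestrian": 0.05}
VULNERABILITY = {"truck": 0.1, "car": 0.4, "bicycle": 0.9, "pedestrian": 1.0}

def expected_harm(actor: str, other: str, collision_prob: float) -> float:
    """Expected harm inflicted on `other` if `actor` might collide with it."""
    return collision_prob * HARM_POTENTIAL[actor] * VULNERABILITY[other]
```

With the same collision probability, a truck-on-bicycle encounter scores far higher expected harm than the reverse, which is the asymmetry the classification is meant to capture.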
They programmed the software to remain below a certain maximum acceptable risk in a wide range of scenarios, and to incorporate risk-based judgements in the vehicle’s trajectory planning. As a result, the autonomous vehicle avoids aggressive manoeuvres, but also avoids simply jamming on the brakes.
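The trajectory-selection step described above can be sketched as filtering candidate manoeuvres against a hard risk cap and then minimising a cost that combines total risk with extra weight on the worst-off road user. Everything here is a hypothetical illustration: the cap value, the cost weights, and the function names are assumptions, not the authors’ implementation.

```python
# Assumed per-road-user risk cap; the real threshold comes from the
# "maximum acceptable risk" principle, with a value set by the designers.
MAX_ACCEPTABLE_RISK = 0.3

def choose_trajectory(trajectories):
    """Pick a trajectory from a list of (name, {road_user: risk}) pairs.

    Trajectories that exceed the cap for any road user are discarded;
    among the rest, minimise total risk plus a penalty on the worst-off
    participant (an illustrative stand-in for the paper's principles).
    """
    admissible = [
        (name, risks) for name, risks in trajectories
        if max(risks.values()) <= MAX_ACCEPTABLE_RISK
    ]
    if not admissible:  # no option meets the cap: fall back to all options
        admissible = trajectories

    def cost(item):
        _, risks = item
        return sum(risks.values()) + 2.0 * max(risks.values())

    return min(admissible, key=cost)[0]
```

For example, a swerve that puts a cyclist at 0.4 risk would be filtered out by the 0.3 cap even if it lowered risk elsewhere, so a firm but non-aggressive braking manoeuvre would be chosen instead.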
Their code is available as open-source software.
Originally published by Cosmos as Who should the autonomous vehicle kill? A new solution to an ethical dilemma
Petra Stock
Petra Stock has a degree in environmental engineering and a Masters in Journalism from University of Melbourne. She has previously worked as a climate and energy analyst.