“Moral Machine” reveals deep split in autonomous car ethics


Huge experiment illustrates the challenge in deciding who dies in the brave new world of self-driving vehicles. Andrew Masterson reports.


Autonomous vehicles are coming, and that's probably a very good thing. However, how they should behave in a crisis is far from clear.

Goldcastle7/Getty Images

An experimental online platform designed to explore moral protocols for autonomous vehicles attracted almost 40 million responses, and the results point to massive problems for roboticists, ethicists, manufacturers and policy-makers striving to find a consensus.

The exercise, dubbed the Moral Machine experiment, was conducted by a team of researchers led by Edmond Awad from the Massachusetts Institute of Technology in the US, and results are reported and discussed in the journal Nature.

The aim of the experiment was to put some meat on the bones of an urgent ethical discussion.

Self-driving vehicles are likely to become commonplace in cities across the world in only a few years. Although the technological challenges inherent in designing such cars and trucks are being rapidly overcome, the ethical issues they create are a long way from being resolved – and, indeed, may not actually be resolvable in a way that accords with current moral paradigms.

Certainly, say Awad and his colleagues, they are never going to be solved by simplistic maxims such as those contained in Isaac Asimov’s oft-cited laws of robotics.

“Asimov’s laws were not designed to solve the problem of universal machine ethics, and they were not even designed to let machines distribute harm between humans,” they write.

“They were a narrative device whose goal was to generate good stories, by showcasing how challenging it is to create moral machines with a dozen lines of code.”

However, they add, “we do not have the luxury of giving up on creating moral machines”.

The nub of the ethical dilemma is inherent in the question of what an autonomous car should do when a circumstance arises in which harm is unavoidable. If a vehicle is barrelling along the road and something – a child, an adult, an animal – suddenly steps out in front of it, what should it do? Should it swerve to avoid the pedestrian (or animal) and thus injure or kill its passengers, or should it preserve its passengers and harm or kill the pedestrian?

And are there other factors that might affect that choice: the species, age, gender or social status of any of the players in the drama, for instance?

These, as the researchers point out, are not choices that can be wholly made by either ethicists or manufacturers. To work, they have to accord with the moral positions of humanity – a consensus, the experimental results show, that may not exist and may be impossible to create.

In the Moral Machine game, users were required to decide whether an autonomous car should careen into unexpected pedestrians or animals, or swerve away from them, killing or injuring its passengers.

The scenario played out in ways that probed nine types of dilemmas, asking users to make judgements based on species, the age or gender of the pedestrians, and the number of pedestrians involved. Sometimes other factors were added. Pedestrians might be pregnant, for instance, or be obviously members of very high or very low socio-economic classes.

All up, the researchers collected 39.61 million decisions from 233 countries, dependencies, or territories.

On the positive side, there was a clear consensus on some dilemmas.

“The strongest preferences are observed for sparing humans over animals, sparing more lives, and sparing young lives,” Awad and colleagues report.

“Accordingly, these three preferences may be considered essential building blocks for machine ethics, or at least essential topics to be considered by policymakers.”
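To see why the researchers call these preferences potential “building blocks”, it helps to imagine how they might be turned into a decision rule. The sketch below is purely illustrative and not drawn from the paper: the Character class, the numeric weights and the choose_outcome function are all hypothetical, and a real system would need far more nuance.

```python
# Illustrative sketch only: a toy scoring rule encoding the three consensus
# preferences the study reports (spare humans over animals, spare more lives,
# spare the young). The Character class, the weights and choose_outcome are
# hypothetical, not part of the Moral Machine or the Nature paper.
from dataclasses import dataclass
from typing import List


@dataclass
class Character:
    is_human: bool
    age: int  # age in years; for animals the value is a placeholder


def outcome_score(spared: List[Character]) -> float:
    """Higher score = the outcome spares more, younger, human lives (toy weights)."""
    score = 0.0
    for c in spared:
        score += 1.0                                          # sparing more lives
        score += 2.0 if c.is_human else 0.0                   # sparing humans over animals
        score += 1.0 if c.is_human and c.age < 18 else 0.0    # sparing the young
    return score


def choose_outcome(spared_if_swerve: List[Character],
                   spared_if_stay: List[Character]) -> str:
    """Pick whichever unavoidable outcome spares the higher-scoring group."""
    if outcome_score(spared_if_swerve) >= outcome_score(spared_if_stay):
        return "swerve"
    return "stay on course"


# Example: swerving spares two children on the road; staying spares one adult passenger.
print(choose_outcome(
    spared_if_swerve=[Character(True, 7), Character(True, 9)],
    spared_if_stay=[Character(True, 40)],
))  # -> swerve
```

Even in this toy form, the hard questions remain: how much extra weight a human life gets over an animal’s, or a child’s over an adult’s, is exactly the kind of choice on which, as the results below show, cultures disagree.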

The four most spared characters in the game, they report, were “the baby, the little girl, the little boy, and the pregnant woman”.

So far, then, so universal. After that, however, divisions in decision-making started to appear, and did so quite starkly. The determinants, it seems, were social, cultural and perhaps even economic.

Awad’s team noted, for instance, that there were significant differences between “individualistic cultures and collectivistic cultures” – a division that also correlated, albeit roughly, with North American and European cultures in the former camp, and Asian cultures in the latter.

In individualistic cultures – “which emphasise the distinctive value of each individual” – there was an emphasis on saving a greater number of characters. In collectivistic cultures – “which emphasise the respect that is due to older members of the community” – there was a weaker emphasis on sparing the young.

Given that car makes and models are manufactured on a global scale, with regional differences extending only to matters such as which side the steering wheel should be on and what the badge says, the finding flags a major issue for the people who will eventually have to program the behaviour of the vehicles.

“Because the preference for sparing the many and the preference for sparing the young are arguably the most important for policymakers to consider, this split between individualistic and collectivistic cultures may prove an important obstacle for universal machine ethics,” write the researchers, with admirable understatement.

Policy-makers are not, they are quick to add, obliged to reflect the preferences expressed in the Moral Machine’s almost 40 million responses. Indeed, to do so would result in some appalling decisions, given that the results also show weak “but clear” preferences for sparing “women over men, athletes over overweight persons, or executives over homeless persons”.

Awad and colleagues hope that the results of their experiment will provide solid data for the people in laboratories, factories and governments who will eventually have to sign off on a code of ethics for autonomous cars.

But it is not, they note, a matter that can be delayed. Indeed, they conclude their report on an ominous note.

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” they write.

“We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

  1. https://doi.org/10.1038/s41586-018-0637-6