Autonomous vehicles favour cooperation

The great moral debate around autonomous vehicles (AVs) is how they will behave in life-and-death situations. Will they sacrifice those inside to save a greater number of pedestrians, or favour the young over the old?

A less dramatic but still important question is how they will behave – or, more likely, be programmed to behave – in other situations where there is a choice between self-interest and community good.

When three US researchers asked that very question through a series of computerised experiments with 1225 volunteers, they came up with some very interesting answers.

The participants generally programmed their AVs to act more cooperatively than if they were driving themselves. And the reason, the scientists suggest, is that programming makes selfish short-term rewards less salient, allowing broader social goals to come into consideration.

The study, published in the journal Proceedings of the National Academy of Sciences, was carried out by Celso de Melo from the US Army Research Laboratory in California, Stacy Marsella from Northeastern University in Boston, and Jonathan Gratch from the University of Southern California in Playa Vista.

In each of the tests, participants had to decide whether to prioritise their own comfort by using the vehicle’s air conditioner or to consider the environment and leave it off.

However, the scenario changed each time: participants might, for example, be given the option to amend their original choices, or be told whether other participants were behaving competitively or cooperatively.
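The set-up is what game theorists call a social dilemma: turning the air conditioner on is always tempting for the individual, yet everyone ends up worse off if all drivers do it. As a rough illustration only, the Python sketch below reproduces that structure with invented payoff values; the study itself does not publish these numbers, and the group size, comfort gain and pollution cost here are assumptions chosen purely to make the tension visible.

```python
# Hypothetical illustration of the air-conditioner social dilemma.
# All values are assumed; none come from the study itself.

GROUP_SIZE = 5        # assumed number of interacting drivers
COMFORT_GAIN = 3.0    # assumed private benefit of running the AC
POLLUTION_COST = 1.0  # assumed cost each AC user imposes on every driver

def payoff(ac_on: bool, num_ac_users: int) -> float:
    """One driver's payoff, given their choice and the total number of AC users."""
    private = COMFORT_GAIN if ac_on else 0.0
    # Everyone, cooperators included, bears the cost of each AC user.
    return private - POLLUTION_COST * num_ac_users

if __name__ == "__main__":
    print("all cooperate:", payoff(False, 0))          # 0.0 for each driver
    print("lone defector:", payoff(True, 1))           # +2.0 for the defector
    print("all defect:   ", payoff(True, GROUP_SIZE))  # -2.0 for each driver
```

Under these assumed values, switching the air conditioner on always raises an individual’s own payoff (by the comfort gain minus one share of the pollution cost, here +2), which is exactly why universal defection, at -2 each, leaves everyone worse off than universal cooperation at zero.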

The results, the researchers report, came out as follows across the five experiments.

The first experiment confirmed that people cooperated more when programming their AVs compared with direct interaction with others, and that this effect was not moderated by their social values.

The second showed that programming caused selfish short-term rewards to become less relevant, leading to increased cooperation, while the third revealed that this effect was robust even when participants could reprogram their vehicles during the interaction.

The fourth showed that participants adjusted their programs based on the behaviour they experienced from their counterparts, while the fifth rather significantly showed that “the effect also occurred in an abstract social dilemma, suggesting that it generalises beyond the domain of autonomous vehicles”.

“Autonomous machines that act on our behalf – such as robots, drones and autonomous vehicles – are quickly becoming a reality,” the authors write.

“These machines will face situations where individual interest conflicts with collective interest, and it is critical we understand if people will cooperate when acting through them.”

They note, however, that the study focused on situations where the decisions were made by the owners of the AVs. In practice, such decisions “may be distributed across multiple stakeholders with competing interests, including government, manufacturers and owners”.
