Getting some SVO for your (auto) SUV

Hands up who’s keen, hankering, positively slavering, for the introduction of self-driving, or autonomous, cars. 

Seriously, engineers are telling us that sometime soon, we’ll be able to sit in the back seat of our very own robocar (I’m going to call mine something completely off-trend – think “Harvey” of the moment) and read the newspaper (I’ve just given away my age), or send texts (like, legally) or engage in conversation with a fellow passenger that involves actual and long-lasting eye contact.

All while stuck in a stinking, rotten peak-hour traffic-jam (Harvey’s fully electric, BTW – so smug) that’s as incidental as clouds drifting across an autumn-afternoon sky.

What’s not to like about this? Well, as it happens, there are a couple of bugs still to iron out when it comes to self-driving cars.

For all their fancy sensors and intricate data-crunching abilities, even the most cutting-edge autonomous cars lack something that (almost) every teenager with a learner’s permit has: social awareness. 

While autonomous technologies have improved substantially, they still ultimately view the drivers around them as obstacles made up of ones and zeros, rather than human beings with specific intentions, motivations, and personalities (not to mention breakable bones, soft tissue, and essential fluids that tend to leak in response to traumatic injuries).

Recently, a team led by researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) started exploring whether self-driving cars can be programmed to classify the social personalities of other drivers.

This would allow them to better predict what different cars will do – and, therefore, be able to drive more safely among them.

In a paper published in the journal Proceedings of the National Academy of Sciences (PNAS), the scientists reveal how they integrated tools from social psychology to classify driving behaviour with respect to how selfish or selfless particular drivers are.

Specifically, they used something called Social Value Orientation (SVO), which represents the degree to which someone is selfish (“egoistic”) versus altruistic or cooperative (“prosocial”). The system then estimates drivers’ SVOs to create real-time driving trajectories for self-driving cars.
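
For the curious, SVO is typically expressed as an angle that trades a driver’s own reward off against everyone else’s. Here’s a minimal Python sketch of that idea; the function and the reward numbers are our illustration, not the paper’s code:

```python
import math

def svo_utility(reward_self, reward_other, phi):
    """Weight a driver's own reward against others' using the SVO
    angle phi (radians): 0 is purely egoistic, pi/4 is prosocial
    (equal weight), pi/2 is purely altruistic."""
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other

# A merge that gains the driver one unit of progress but costs the
# car behind half a unit looks very different to different drivers:
print(svo_utility(1.0, -0.5, 0.0))           # egoist: 1.0 (take the gap)
print(svo_utility(1.0, -0.5, math.pi / 4))   # prosocial: ~0.35 (less keen)
```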

(Image credit: MIT CSAIL)

The team tested their algorithm on the tasks of merging lanes and making unprotected (across traffic) turns.

They showed that this allowed them to predict the behaviour of other cars 25% more accurately. For example, in the across-traffic-turn simulations, their car knew to wait when the approaching car had a more egoistic driver, and to make the turn when the other driver was more prosocial.
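
A toy version of that waiting rule, assuming the car has already estimated the oncoming driver’s SVO angle (the threshold here is invented for illustration, not taken from the paper):

```python
import math

def should_turn(oncoming_phi, prosocial_threshold=math.pi / 8):
    """Toy rule for an unprotected turn: go if the oncoming driver's
    estimated SVO angle suggests they are prosocial enough to yield,
    otherwise wait. Threshold is illustrative only."""
    return oncoming_phi >= prosocial_threshold

print(should_turn(0.1))          # egoistic driver approaching: wait
print(should_turn(math.pi / 4))  # prosocial driver approaching: turn
```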

The system isn’t yet sufficiently robust to be implemented on real roads, but it could have some intriguing applications, and not just for cars that drive themselves.

Say you’re a human driving along and a car suddenly enters your blind spot. The system could give you a warning in the rear-view mirror that the blind-spot car has an aggressive driver. Duly warned, you’d adjust accordingly.
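
Sketched as a hypothetical driver-assist hook (the name and threshold are ours, purely illustrative):

```python
def blind_spot_alert(estimated_phi, egoistic_threshold=0.2):
    """Hypothetical driver-assist hook: flag a blind-spot vehicle whose
    estimated SVO angle (radians) is low enough to suggest an
    aggressive, egoistic driver."""
    if estimated_phi < egoistic_threshold:
        return "Caution: aggressive driver in blind spot"
    return None
```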

It could also allow self-driving cars to actually learn to exhibit more human-like behaviour, which will be easier for human drivers to understand.

“Working with and around humans means figuring out their intentions to better understand their behaviour,” says Wilko Schwarting, the paper’s lead author.

“People’s tendencies to be collaborative or competitive often spill over into how they behave as drivers. In this paper we sought to understand if this was something we could actually quantify.”

A central issue with today’s self-driving cars is that they’re programmed to assume that all humans act the same way. This means that, among other things, they’re quite conservative in their decision-making.

While this caution reduces the chance of fatal accidents, it also creates bottlenecks that can be frustrating for other drivers, not to mention hard for them to understand.

“Creating more human-like behaviour in autonomous vehicles [AVs] is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV’s actions,” says Schwarting.

To try to expand the car’s social awareness, the CSAIL team combined methods from social psychology with game theory, a framework for analysing strategic interactions among competing players.

The team modelled road scenarios in which each driver tried to maximise their own utility, and analysed their “best responses” given the decisions of all other agents. Based on a small snippet of observed motion from other cars, the team’s algorithm could then classify surrounding drivers’ behaviour as cooperative, altruistic, or egoistic, grouping the first two as “prosocial”.
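
One way to read that estimation step, sketched loosely: try a handful of candidate SVO angles, ask the game-theoretic model what a driver with each angle would do next, and keep the angle whose prediction best matches the motion actually observed. Everything below (the candidate grid, the error measure, the `predict_best_response` callback) is our illustration, not the paper’s implementation:

```python
import math

def estimate_svo(observed_positions, predict_best_response):
    """Fit an SVO angle to a short snippet of observed motion by
    picking the candidate angle whose predicted 'best response'
    trajectory is closest (in squared error) to what was observed.
    `predict_best_response(phi)` stands in for the game-theoretic
    planner and returns a list of predicted positions."""
    candidates = [i * math.pi / 16 for i in range(9)]  # 0 to pi/2

    def fit_error(phi):
        predicted = predict_best_response(phi)
        return sum((p - o) ** 2
                   for p, o in zip(predicted, observed_positions))

    return min(candidates, key=fit_error)

# Example with a stand-in planner: a driver who hangs back more the
# more prosocial they are (purely made up for demonstration).
observed = [0.0, 0.9, 1.7]
planner = lambda phi: [0.0, 1.0 - 0.3 * phi, 2.0 - 0.6 * phi]
print(estimate_svo(observed, planner))  # angle closest to the data
```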

People’s scores for these qualities rest on a continuum with respect to how much a person demonstrates care for themselves versus care for others.

The system was trained to try to better understand when it’s appropriate to exhibit different behaviours.

For example, even the most deferential of human drivers knows that certain types of actions – such as changing lanes in heavy traffic – require a moment of being more assertive and decisive.

For the next phase of the research, the team plans to apply its model to pedestrians, bicycles and other agents in driving environments.

“By modelling driving personalities and incorporating the models mathematically using the SVO in the decision-making module of a robot car, this work opens the door to safer and more seamless road-sharing between human-driven and robot-driven cars,” says co-author Daniela Rus.
