Robot cars would be a bit less robotic if they drove a bit more like us, Dutch researchers suggest. And they may have an algorithm for that.
Sarvesh Kolekar and colleagues from the Delft University of Technology developed a model they say describes driving behaviour on the basis of one underlying human principle: keeping perceived risk below a threshold level.
“You don’t always adapt your driving behaviour to stick to one optimum path,” Kolekar says. “People don’t drive continuously in the middle of their lane, for example: as long as they are within the acceptable lane limits, they are fine with it.”
Intelligent cars aren’t fine with it, however. The current generation drives “very neatly”, as Kolekar puts it, continuously and robotically searching for the safest path at the appropriate speed.
The aim of the project – described in a paper in the journal Nature Communications – was to accurately predict human behaviour during a wide range of driving tasks, in a way that might be transferable to vehicles.
The first step was to introduce the Driver’s Risk Field (DRF), an ever-changing two-dimensional field around the car that indicates how high the driver considers the risk to be at each point. Kolekar developed the risk assessments in previous research, inspired, he says, by a concept from psychology put forward in the 1930s.
The gravity of the consequences of the risk in question is then taken into account. A cliff beyond the road boundary, for example, is far more dangerous than a grass verge.
“The DRF, when multiplied with the consequence of the event, provides an estimate of the driver’s perceived risk,” the authors write in their paper.
“Through human-in-the-loop and computer simulations, we show that human-like driving behaviour emerges when the DRF is coupled to a controller that maintains the perceived risk below a threshold-level.”
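The mechanism described above can be sketched numerically. The following Python snippet is a minimal illustration of the idea, not the authors' implementation: the Gaussian field shape, the grid, the cost values and the threshold are all assumptions chosen for clarity (the paper's actual DRF also depends on speed and steering angle).

```python
import numpy as np

# Grid of points around the car (illustrative dimensions).
xs = np.linspace(0, 50, 101)   # metres ahead of the car
ys = np.linspace(-5, 5, 51)    # metres lateral; lane spans roughly -2..+2
X, Y = np.meshgrid(xs, ys)

# Consequence map: grass verge on one side (low cost),
# cliff on the other (high cost).
consequence = np.where(Y > 2.0, 1.0, 0.0) + np.where(Y < -2.0, 100.0, 0.0)

def drf(lateral_offset):
    """Toy Driver's Risk Field: a 2-D Gaussian ahead of the car, centred
    on its lateral position. Keeps only the 'field of concern' idea."""
    return np.exp(-X**2 / (2 * 10.0**2)
                  - (Y - lateral_offset)**2 / (2 * 1.5**2))

def perceived_risk(lateral_offset):
    """Perceived risk = DRF multiplied point-wise by the consequence map."""
    return float(np.sum(drf(lateral_offset) * consequence))

# Satisficing controller: any lateral position whose perceived risk stays
# below the driver's threshold is acceptable -- the driver does not chase
# the single minimum-risk path.
threshold = 1.1 * perceived_risk(0.0)   # illustrative threshold
acceptable = [y0 for y0 in np.linspace(-1.5, 1.5, 13)
              if perceived_risk(y0) < threshold]
# The acceptable band is biased away from the cliff: drifting towards the
# grass barely raises the perceived risk, drifting towards the cliff does.
```

With these numbers, offsets towards the cliff exceed the threshold while offsets towards the grass stay under it, which mirrors the satisficing behaviour the paper describes: any position inside the acceptable band is "fine".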
They then tested the model in seven scenarios: curves of varying radius, different lane widths, obstacle avoidance, roadside furniture, car-following, overtaking and oncoming traffic.
“It turned out that our model only needs a small amount of data to ‘get’ the underlying human driving behaviour and could even predict reasonable human behaviour in previously unseen scenarios,” Kolekar says. “Thus, driving behaviour rolls out more or less automatically; it is ‘emergent’.”
The model is at the very least an elegant description of human driving behaviour. However, the researchers are confident it has significant predictive and generalising value that could be applied to intelligent cars and driver-assistance systems.
“If intelligent cars were to take real human driving habits into account, they would have a better chance of being accepted,” Kolekar suggests. “The car would behave less like a robot.”