Fatality warning road signs increased accidents in Texas

Signs o’ the times: the traffic warnings that didn’t work

We all know times when our best efforts to do something positive can go awry. We try to comfort a friend going through a hard time… and inadvertently make it worse. We congratulate someone on a new job – only to find out it wasn’t working out for them as well as they’d hoped. Try as we might, our best intentions sometimes misfire.

Now, scientists say, the same appears to apply to a popular class of traffic safety messages: ones intended to reduce accidents by warning drivers about the inherent dangers of driving.

Such messages, which have popped up on signs in 28 American states and then spread to other countries, are designed to alert motorists to the risks they face every time they take the wheel, with stark proclamations such as: “1759 people have died this year on Illinois roads.”

It sounds like a good idea. But, a new study in Science says, these messages don’t appear to reduce accidents. Instead, they actually increase them.

The study had its origins in 2012, when corresponding author Joshua Madsen, an accounting professor at the University of Minnesota, was visiting Chicago, Illinois. “I was driving on the freeway and saw a traffic fatality message – something I’d never seen before,” he says. “Then I saw another in Nashville, Tennessee.”

Intrigued, he teamed up with Jonathan Hall, now an economist at the University of Toronto, Canada, and set out to try to figure out if such signs were actually effective.

It was becoming clear that the signs not only didn’t work, they increased crash rates by about 4.5%

It took a while to figure out how to do so, because these messages tend to appear on large digital displays, called dynamic message signs (DMSs), that are also used for other purposes, such as advising about road construction, traffic delays, or other pertinent information about the route ahead. The warnings appear only when the signs aren’t needed for something more urgent, making it very difficult to determine their impact.

Eventually, however, Madsen discovered that the state of Texas had a unique policy of displaying these warnings one week each month: the week before the Texas Department of Transportation’s monthly board meeting. Why they did it that way isn’t fully clear, but it made a perfect natural laboratory for Madsen and Hall’s test because it meant they knew exactly when these messages were being displayed.

With that information, they could tabulate their effect on the number of accidents downstream from them by comparing statistics for weeks when the warning was being displayed to those from weeks when it wasn’t. It was the perfect controlled study, because the only thing that changed from the “on” weeks to the “off” weeks was the message.

Fig. 1 shows that there are more crashes during DMS campaign weeks than in other weeks. Credit: Jonathan D. Hall, Joshua M. Madsen / Science

The data could even be broken down day-by-day, hour-by-hour if desired. “[O]ur estimates compare, for example, the number of crashes within 10km downstream of a DMS from 2:00 to 3:00 p.m. on Thursday 18 July (which occurred during the week before a board meeting) against the number of crashes on the same road segment from 2:00 to 3:00 p.m. on the other three Thursdays in July 2013,” Madsen and Hall wrote in their paper.

Traffic conditions might vary from one day to another, but Texas is a large state with nearly 900 such signs, and by aggregating data from all of them, such variations even out.
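The comparison Madsen and Hall describe can be illustrated with a toy calculation (the crash counts below are invented for illustration, not the study’s data): for each sign, compare the crash count during the campaign week against the average of the same hours in that month’s “off” weeks, then aggregate across signs.

```python
# Illustrative sketch of the within-sign comparison (made-up numbers,
# not the study's actual data). For each sign we compare crashes
# downstream during the campaign week with the average of the same
# hours in the three "off" weeks of that month.

def campaign_effect(signs):
    """signs: list of (campaign_week_crashes, [off_week_crashes, ...])."""
    diffs = []
    for on_count, off_counts in signs:
        baseline = sum(off_counts) / len(off_counts)
        diffs.append(on_count - baseline)
    return sum(diffs) / len(diffs)  # average excess crashes per sign

# Hypothetical counts for three signs: noise at any one sign is large,
# but aggregating across many signs lets day-to-day variation even out.
signs = [
    (23, [21, 20, 22]),
    (18, [19, 17, 18]),
    (31, [28, 29, 27]),
]
print(campaign_effect(signs))  # average excess crashes across the signs
```

With nearly 900 signs rather than three, the same averaging is what lets a small per-sign effect emerge from noisy daily traffic.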

“It is just a goldmine,” Madsen says.

It was also a mammoth exercise in data collection. “I started in 2013,” he says. “We have been working on it ever since.”

When the data started to roll in, he says, “it was thrilling and terrifying at the same time” – because it was becoming clear that the signs not only didn’t work, they increased crash rates by about 4.5% in the next 10 kilometres downstream. That was thrilling because it was clearly an important finding. It was terrifying because it contradicted the widespread presumption that such sobering messages would be… well… sobering, and therefore make people drive more carefully. It’s not the type of result you announce lightly.

Madsen thinks that the reason the signs backfired is probably that they were too shocking (though he prefers the word “salient”).

“The concept of death is much more salient than ‘drive sober’ or ‘click it or ticket’…[It] has its own special place in human existence.”

Joshua Madsen, University of Minnesota

“The concept of death is much more salient than ‘drive sober’ or ‘click it or ticket,’” he says. “[It] has its own special place in human existence.”

That special place, he says, may mean that the signs grabbed attention in the wrong way, making people ponder their own mortality rather than doing what they were being encouraged to do: drive more safely. Either that, or perhaps many people don’t really know how to drive more safely and over-compensate by, for instance, braking harder than necessary when they need to slow down. “That is one possible explanation,” he says of the latter.

Gerald Ullman, a transportation engineer at Texas A&M University, agrees that Madsen and Hall have very much made their statistical case. Their data, Ullman and colleague Susan Chrysler say in an analysis article in the same issue of Science, clearly shows that the signs have a negative effect.

But he doesn’t think Madsen and Hall have proven that it’s fear of death that’s causing it. “I think there are other plausible explanations,” he told Cosmos.


For example, he says, maybe the signs are simply one more distraction to drivers already engaged in the complex process of navigating freeways, often in highly urban areas. “We know from research we’ve been doing for 50-some years that information loading is a complex process,” he says. Adding to that via these warning signs may just be creating a bit of ill-timed information overload.

Madsen counters by diving deeper into his dataset. His data don’t just show the overall effect of the warning signs: he can relate that effect to the number of traffic deaths stated on them.

That’s because Texas resets that number every year. So, the number starts out low, then steadily rises, typically capping out between 3,000 and 4,000. When he plots those rising numbers against the degree to which downstream accidents are affected, he finds a strong linear relationship. “I find that one of the most interesting and informative [of our] findings,” he says. “The bigger the number displayed, the more crashes that will occur in that month.”

Fig. 4 shows that the effect of displaying a fatality message drops 11 percentage points between January and February, when the displayed number of deaths resets. Credit: Jonathan D. Hall, Joshua M. Madsen / Science

The best explanation for that, he says, is that larger fatality numbers are more frightening and therefore more likely to make people start thinking about the wrong things. (He notes that the best way to prove this would be to put people in a driving simulator and monitor their brain activity as they encounter such signs, something he hasn’t yet had the equipment to attempt.)
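The dose-response pattern he describes amounts to fitting a straight line through pairs of (displayed fatality count, crash effect). A minimal sketch of that idea, using invented numbers rather than the paper’s estimates:

```python
# Minimal ordinary-least-squares sketch of the dose-response idea:
# regress the monthly crash effect on the fatality count shown that
# month. The (count, effect) pairs below are invented for illustration.

def fit_line(points):
    """Ordinary least squares for y = a + b*x over (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    a = my - b * mx
    return a, b

# Hypothetical: displayed death toll vs. % change in downstream crashes.
points = [(200, 1.0), (1200, 3.0), (2400, 5.4), (3600, 7.8)]
a, b = fit_line(points)
print(round(b * 1000, 2))  # extra percentage points per 1,000 deaths shown
```

A strongly linear fit like this is what the paper’s Figure 4 pattern suggests: the larger the number displayed, the bigger the downstream effect.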

Ullman still isn’t persuaded. The size of the number on the signs might well be a contributing factor, he says, but that doesn’t mean larger numbers are triggering more fear of death. They might just be too big to easily digest. Humans do a good job of dealing with small numbers, he says. But when they get large and complex? Those take more cognitive power to interpret, regardless of whether they are about fatalities, distance driven, or anything else, he says.

The best explanation for that, he says, is that larger fatality numbers are more frightening and therefore more likely to make people start thinking about the wrong things.

One thing Madsen and Ullman fully agree on, however, is that the study demonstrates the importance of thinking before implementing seemingly good ideas willy-nilly. “The big takeaway is that it is important to make sure we are very careful about how much [information] we present, and how we present it,” Ullman says.

Madsen adds that it is also important to find ways to test such ideas in ways that allow the collection of good data – not just for traffic safety, but for pretty much anything else for which such tests might be relevant. If we’re trying to make the world better, he says, let’s do it in a way that we can quantifiably measure it “and see how well it’s doing”.

After all, if Texas hadn’t accidentally created the perfect experiment, it’s doubtful anyone could ever have figured out that these popular warnings actually had negative effects.
