Data visualisations made more accessible to screen reader users

Data visualisations are an important part of online media, especially since the onset of the COVID-19 pandemic. But they’re often inaccessible to people who use screen readers.

A type of assistive technology, screen readers are software programs that scan the contents of a computer screen and transform them into a different format – like synthesised voice or Braille – for people with complete or partial blindness, learning disabilities, or motion sensitivity.

Now, scientists from the University of Washington (UW) in the US have designed a JavaScript plugin called VoxLens that allows people to better interact with these visualisations.

VoxLens allows screen reader users to gain a high-level summary of the information described in a graph, listen to the graph translated into sound, or use voice-activated commands to ask specific questions about the data, such as the mean or the minimum value.
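The sound mode works by sonification: mapping each data value to a pitch, so that a rising trend, for instance, is heard as a rising tone. VoxLens’s own implementation isn’t published in this article, but a minimal sketch of the general idea, using the browser’s standard Web Audio API, might look like the following – the frequency range and note length here are arbitrary choices for illustration, not values from VoxLens:

    // Minimal sonification sketch (illustrative only, not VoxLens's actual code):
    // play each data point as a short tone whose pitch tracks its value.
    function sonify(values, { minFreq = 220, maxFreq = 880, noteSeconds = 0.2 } = {}) {
      const ctx = new AudioContext();
      const lo = Math.min(...values);
      const hi = Math.max(...values);
      values.forEach((value, i) => {
        const osc = ctx.createOscillator();
        // Map the value linearly onto the chosen frequency range.
        const t = hi === lo ? 0.5 : (value - lo) / (hi - lo);
        osc.frequency.value = minFreq + t * (maxFreq - minFreq);
        osc.connect(ctx.destination);
        // Schedule the tones back to back, one per data point.
        osc.start(ctx.currentTime + i * noteSeconds);
        osc.stop(ctx.currentTime + (i + 1) * noteSeconds);
      });
    }

    // A rising series is heard as a rising sequence of tones:
    // sonify([3, 5, 8, 13, 21]);

Played over a series like this, the tones step upward in pitch, giving a listener the overall shape of the data without seeing the graph.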

The team presented their research last month at the ACM CHI Conference on Human Factors in Computing Systems in New Orleans in the US.

“If I’m looking at a graph, I can pull out whatever information I am interested in – maybe it’s the overall trend or maybe it’s the maximum,” says lead author Ather Sharif, a doctoral student in the Paul G. Allen School of Computer Science & Engineering at UW.

“Right now, screen reader users either get very little or no information about online visualisations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen reader users a platform where they can extract as much or as little information as they want.”

The difficulty with translating graphs, according to co-senior author Jacob O. Wobbrock, a professor of information at UW, lies in deciphering information that has no clear beginning or end.

“There is a start and an end of a sentence and everything else comes in between,” he explains. “But as soon as you move things into two-dimensional spaces, such as visualisations, there’s no clear start and finish.

“It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”

Working with screen reader users to improve accessibility

The team worked with screen reader users who had partial or complete blindness when designing and testing the tool. During the testing phase, participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a data visualisation.

The researchers found that participants completed the tasks with 122% higher accuracy and 36% less interaction time than participants in a previous study who hadn’t had access to VoxLens.

“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find what the maximum is,” says Sharif. “In our study, interaction time refers to how long it takes to extract information, and that’s why reducing it is a good thing.”

VoxLens can be implemented easily by data visualisation designers with a single line of code. Right now it only works for visualisations created using JavaScript libraries – such as D3, Chart.js or Google Sheets – but the team is working towards expanding it to other popular platforms.
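To give a sense of what that one-line integration looks like: the voxlens package exposes a single entry-point function that is called with the charting library’s name, the chart’s container element, the underlying data, and a few options. The sketch below is an assumption-laden illustration – the option names (x, y, title) and the exact call shape should be checked against the VoxLens README rather than taken as definitive:

    // Illustrative sketch only – consult the VoxLens README for the exact API.
    import voxlens from 'voxlens';

    // Hypothetical dataset for a D3 bar chart.
    const data = [
      { country: 'Australia', cases: 42 },
      { country: 'Brazil', cases: 97 },
    ];

    // ... render the chart into #chart with D3 as usual ...
    const element = document.querySelector('#chart');

    // The single added line: attach VoxLens to the rendered visualisation,
    // telling it which data fields hold the x and y values.
    voxlens('d3', element, data, { x: 'country', y: 'cases', title: 'COVID-19 cases by country' });

Because the plugin hooks into the chart after it is rendered, the designer’s existing visualisation code is left untouched.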

“This work is part of a much larger agenda for us – removing bias in design,” adds co-senior author Katharina Reinecke, associate professor in the Paul G. Allen School of Computer Science & Engineering at UW. “When we build technology, we tend to think of people who are like us and who have the same abilities as we do.

“For example, D3 has really revolutionised access to visualisations online and improved how people can understand information. But there are values ingrained in it and people are left out. It’s really important that we start thinking more about how to make technology useful for everybody.”
