How healthcare is tackling the problem of fairness in AI data

The risks and ethical considerations associated with the data used to train Artificial Intelligence (AI) in health care have been outlined by an international expert at a health conference in Adelaide.

The problem, says Professor Alastair Denniston, director of the Centre for Regulatory Science and Innovation at the University of Birmingham, in the UK, is that there is a fairness gap.

Denniston was giving a keynote address at the Australian Academy of Health and Medical Sciences (AAHMS) 2024 Annual Meeting yesterday in Adelaide, South Australia.

Alastair Denniston during his keynote speech at the Australian Academy of Health and Medical Sciences (AAHMS) 2024 Annual Meeting in Adelaide. Credit: Imma Perfetto

“There is increasing awareness of a fairness gap and AI bias … we’re seeing this all the time in the headlines. But what does this look like for health? What does the evidence say?”

Denniston described the problem of health data poverty: the “inability for individuals, groups or populations to benefit from data-driven discoveries and innovations due to insufficient data that are representative of them”.

“Bias can come into AI anywhere from concept and priority setting all the way through to post deployment,” he said.

“We’ve focused on one area in particular … data considerations.

“Data sets that either are currently being used for development of AI or [are] publicly available … these data sets are poorly diverse, they only represent certain parts of the world, often the more affluent ones.

“Not only do we have, potentially, data sets that are poorly representative of a rich diversity of our populations, but actually we don’t even know who’s in those data sets.

“You will assume that it’s probably mainly representing the majority population, but it’s really hard to purposefully look to see whether underrepresented groups are there and how an algorithm performs in most people.”

Denniston highlighted a large study which found that a widely used algorithm predicting acute kidney injury performed well in men but significantly worse in women. He also cited a study which found that an algorithm used to predict healthcare need, and therefore to allocate access to healthcare, significantly disadvantaged black Americans.

“If we roll this out at scale, we are basically amplifying bias across society,” he said.
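The kind of subgroup audit these findings point to can be sketched in a few lines of code. The example below is a minimal illustration, not the method used in either study: it assumes a hypothetical evaluation table recording each patient’s demographic group, the model’s risk score and the true outcome (all column names are invented), and compares performance group by group rather than as a single headline figure.

```python
# Minimal sketch of a subgroup performance audit.
# All column names ("sex", "outcome", "risk_score") are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_group(df: pd.DataFrame, group_col: str,
                 label_col: str = "outcome", score_col: str = "risk_score") -> pd.Series:
    """ROC AUC of an existing risk score, computed separately within each subgroup."""
    return df.groupby(group_col).apply(
        lambda g: roc_auc_score(g[label_col], g[score_col])
    )

# Hypothetical usage:
# df = pd.read_csv("evaluation_set.csv")
# print(auc_by_group(df, "sex"))        # may reveal a male/female performance gap
# print(auc_by_group(df, "ethnicity"))  # only possible if the dataset records who is in it
```

A stratified report like this is only possible when the dataset documents who is in it, which is exactly the transparency gap Denniston describes.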

In response to this issue, an initiative was established in 2021 to develop standards for data diversity, inclusivity and AI generalisability. STANDING Together is a partnership of representatives drawn mainly from the UK, the US, Canada and Australia, along with the World Health Organisation.

These professionals represent more than 30 academic, regulatory, policy, industry and charitable organisations, developing recommendations for both dataset builders and dataset users.

“Our emphasis has always been on transparency. It’s not trying to pretend we can make things perfect,” Denniston said.

“But if we go with transparency, then people can make choices. We can see, okay, we’ll use this data set for this application, but we also recognise that there are these limits, and we need to be careful around the edges and the limits of interpretation.”

Denniston says the standards have been widely adopted since their release.

“In fact, the first regulator to formally adopt it after launch was the Therapeutic Goods Administration [in Australia],” he says.

Cosmos is an official media partner of the AAHMS Annual Meeting.

