Artificial intelligence (AI) and machine learning have the potential to transform the way healthcare is delivered, but there are also unique challenges to adopting AI technologies ethically and responsibly.
AI ethicist Dr Melissa McCradden, the AI Director of the Women’s and Children’s Hospital Network (WCHN) in South Australia, says machine learning is exceptionally good at pattern recognition, and pattern recognition is important to medicine.
“It’s what helps doctors figure out what kind of problem a patient might have, or what kind of treatment is likely to work. And we use those patterns all the time to help us deliver care,” she told Cosmos.
AI tools can help clinicians detect these patterns faster and more accurately. McCradden points to work using AI to assist clinicians in evaluating medical imaging, such as detecting fractures in X-rays or tumours in MRIs, and in predicting when patients might be heading for cardiac problems or developing sepsis, to facilitate early intervention.
The challenge lies in determining how to rigorously test AI tools before they are translated into clinical use, so that their efficacy, safety and fairness are backed by the same standard of evidence as other medical tools.
McCradden is also the Hospital Research Foundation Clinical Research Fellow in AI Ethics at the Australian Institute for Machine Learning (AIML) at the University of Adelaide. She will be part of a panel discussing these challenges at the Australian Academy of Health and Medical Sciences Annual Meeting to be held in Adelaide this week.
One of her major research projects revolves around “translational trials”, which involve testing AI tools in live settings without affecting patient care, to make certain that an AI tool will work at a specific hospital for a specific set of patients.
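As a rough, purely hypothetical sketch of what a “silent” or shadow evaluation of this kind can look like (the model, field names and logging format below are illustrative assumptions, not McCradden’s actual protocol): the tool scores incoming cases and its predictions are logged for later comparison with real outcomes, but nothing is ever shown to the care team.

```python
# Illustrative only: assumes a hypothetical scikit-learn-style model and a
# stream of de-identified patient records. Predictions are logged for
# retrospective audit and never returned to the clinical workflow, so
# patient care is unaffected.
import csv
from datetime import datetime, timezone

def shadow_evaluate(model, patient_records, log_path="shadow_trial_log.csv"):
    """Run the model silently and log its predictions for later review."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for record in patient_records:
            prediction = model.predict([record["features"]])[0]  # hypothetical field
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                record["case_id"],                               # hypothetical field
                prediction,
            ])
    # The logged predictions are compared against what actually happened to
    # each patient only after the fact -- the point of a translational trial.
```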
She is also developing the policies, procedures, and processes to build AI tools, test them, evaluate them for impact, and then integrate them into the hospital system.
For example, mitigating bias is a major concern in AI ethics. Addressing it may involve evaluating an AI tool’s fairness by determining whether it has the same level of accuracy across the population, or whether there are certain groups of people for whom it makes more errors.
“So, in each situation we have to look at … who are the people at disadvantage, and what are the consequences of that?” says McCradden.
A false negative result, for instance, may have a markedly different impact on someone living in an urban setting, who is well-connected within the medical system and can readily access a hospital, compared with a person living in a rural setting, who may face hours of travel to reach the nearest hospital.
“On the basis of that information, we can make a decision about how we want our AI tool to perform, or what kinds of decisions we want our clinicians to make.”
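To make the idea of checking accuracy across groups concrete, here is a minimal sketch (using pandas, with hypothetical column names such as patient_group, true_label and prediction) that reports accuracy and false-negative rate for each group in a table of model outputs; a markedly higher false-negative rate for, say, rural patients is exactly the kind of disparity that would feed into the decisions McCradden describes.

```python
# Illustrative only: assumes a pandas DataFrame of model outputs with
# hypothetical columns for group membership, true outcome and prediction.
import pandas as pd

def subgroup_report(df, group_col="patient_group",
                    label_col="true_label", pred_col="prediction"):
    """Report accuracy and false-negative rate for each population group."""
    rows = []
    for group, sub in df.groupby(group_col):
        accuracy = (sub[pred_col] == sub[label_col]).mean()
        positives = sub[sub[label_col] == 1]
        # False-negative rate: true cases that the tool missed.
        fnr = (positives[pred_col] == 0).mean() if len(positives) else float("nan")
        rows.append({"group": group, "n": len(sub),
                     "accuracy": round(accuracy, 3),
                     "false_negative_rate": round(fnr, 3)})
    return pd.DataFrame(rows)
```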
Another crucial part of the puzzle of integrating AI into the Australian healthcare system involves policy.
According to McCradden, one of the first steps involves looking at where AI can be accommodated within existing standards and regulatory bodies – such as the National Safety and Quality Health Service Standards, the Therapeutic Goods Administration, and the Australian Commission on Safety and Quality in Health Care – and where new ones need to be developed.
This is where McCradden emphasises the importance of “collective governance”.
“Many different people … have a piece of knowledge that they bring to how we govern AI effectively and a really good governance scheme requires all of those perspectives together,” she says.
This includes consumer and Indigenous values and perspectives, which McCradden says are currently underrepresented.
“The first thing that I started doing when I got [to the WCHN] was speaking with our consumers and speaking with my colleagues at the Aboriginal Health Unit,” she says.
“We need to authentically partner with Aboriginal colleagues, knowledge holders, and consumers, meeting them where they’re at and co-developing resources that are relevant to them.
“I work with consumers all the time, but I can’t stand in for a consumer because I sit within the [healthcare] system.
“And honestly, they surprise me all the time with things they say, things that I wouldn’t have thought were that important they feel are quite important. Or things that I think are quite important, them not necessarily so.
“I think it’s really about making sure that you’re giving them the right information so that they are empowered and can engage their healthcare rights in that space.”
Cosmos is an official media partner of the AAHMS Annual Meeting.