Clinical trials – a notoriously long, risky and costly process – may be transformed by deep learning AI.
It took a pandemic to familiarise the world with the concept of a clinical trial, but even COVID couldn’t make them sexy. Fundamental to getting a drug or device to market, the process is expensive, complicated, drawn out, and prone to fail.
But well before COVID-19 was a nightly news feature, researchers around the world were working on an ambitious solution that used artificial intelligence (AI) to predict how a trial would end – well before it ever began.
Experts say these algorithms are set to radically change a clinical trial industry that still operates much as it did in the 1920s.
Advances in AI over the last 10 years around deep learning (a subfield of AI) and emerging research on “AI explainability” are key, says Daswin de Silva, associate professor and deputy director at La Trobe University’s Centre for Data Analytics and Cognition.
“One kind of algorithm will not solve this kind of complex problem, it has to be a combination,” he says. “It is called an ensemble approach.”
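The ensemble idea de Silva describes can be sketched in a few lines: several independent models each give a verdict, and their outputs are combined, for instance by majority vote. The toy “models” and trial features below are purely illustrative assumptions, not Opyl’s actual system.

```python
from collections import Counter

def ensemble_predict(models, trial_features):
    """Combine the verdicts of several models by majority vote."""
    votes = [model(trial_features) for model in models]
    return Counter(votes).most_common(1)[0][0]

# Three toy "models", each keying off a different feature of a trial.
models = [
    lambda t: "likely" if t["phase2_passed"] else "unlikely",
    lambda t: "likely" if t["enrolment"] >= 300 else "unlikely",
    lambda t: "likely" if t["sites"] >= 10 else "unlikely",
]

trial = {"phase2_passed": True, "enrolment": 450, "sites": 6}
print(ensemble_predict(models, trial))  # two of three vote "likely"
```

Real systems combine far more sophisticated learners, but the principle is the same: no single model decides alone.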
Deep learning is a class of machine learning that uses neural networks with multiple layers of nodes, learning simple individual concepts and combining them to understand more complex ones: for example, that a human ear, eyes, nose and chin make up a face.
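That layering can be illustrated with a toy two-layer pass: a lower layer “detects” simple parts, and a higher layer combines them into the face concept. In a real deep network these detectors are learned from data; here they are hard-coded for illustration.

```python
def layer1(pixels):
    # Toy "feature detectors": each fires if its region is bright enough.
    return {
        "eyes": pixels["eye_region"] > 0.5,
        "nose": pixels["nose_region"] > 0.5,
        "chin": pixels["chin_region"] > 0.5,
    }

def layer2(features):
    # Higher layer: combines simple parts into a complex concept.
    return all(features.values())  # a "face" only if every part is present

pixels = {"eye_region": 0.9, "nose_region": 0.7, "chin_region": 0.8}
print(layer2(layer1(pixels)))  # True: the parts compose into a face
```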
“The other part is the explainability of machine learning,” says de Silva. “The more complex that model is, the less explainable the outcome is… [so] if the machine learning model also has to explain how that probability came about, there is a bit of an obstacle there.”
Australian company Opyl says it’s using these methods and as a result, thinks it may have the problem licked.
But Opyl and its commercial rivals are also running into a series of problems that even the most advanced AI can’t yet solve.
Risky and expensive
Clinical trials are a three-phase process that can take 10–15 years: Phase I tests whether a drug or device is safe; Phases II and III test whether it works.
In 2013, the cost of bringing a therapeutic through to commercialisation was $3–4.5 billion per new approved drug, and Eroom’s Law (“Moore” backwards) holds that costs double every nine years. Success rates for taking a drug from Phase I to commercial launch are stuck at around 7%, according to a Nature study in 2019.
Those numbers are attracting corporate researchers, who are applying deep learning to analyse everything from trial location to a drug’s active ingredient, in order to inform trial design by offering a probability of success with an explanation of how it can get there.
Pfizer has tried a predictive design; US company Certara owns a Trial Simulator; and health analytics business Iqvia can “pressure test” a clinical trial before and while it’s running.
The complexity of deep learning software and the vast volumes of often proprietary data required mean this is a small field, with limited academic research underlying it.
With private companies leading research into deep learning and clinical trials, most are wrapped up in layers of commercial-in-confidence secrecy.
In Australia, Opyl too is reluctant to say exactly what is going into the software, only that it incorporates a wide range of “creative” data sources, more than 500 data points, and a combination of deep learning and explainable AI methods.
“Explainable AI is working out how the variables interact with each other,” says Opyl CEO Michelle Gallaher. “So in a protocol for a clinical trial design not all variables interact with each other but some do, and in different therapeutic areas some will be more influential than others.
“That’s when it really starts to think for itself and it starts to come up with a leap in understanding that is not coded in. This is the holy grail, this is what we’re all trying to get to in the AI space.”
Explainable AI is the next frontier as it opens the machine learning “black box” to show how it came to an answer. It only began taking off around 2017, pushed along by US defence R&D agency DARPA investments and a Chinese government plan to encourage research.
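One simple way to pry open that black box, and an assumption-laden stand-in for the far richer methods companies like Opyl would use, is an ablation-style check: perturb each input variable in turn and measure how much the model’s prediction moves. The toy “success probability” model and feature names below are invented for illustration.

```python
def explain(predict, example, baseline=0.0):
    """Ablation-style explanation: score how much each input feature
    moves the prediction when it is zeroed out."""
    base = predict(example)
    importance = {}
    for name in example:
        perturbed = dict(example, **{name: baseline})
        importance[name] = abs(base - predict(perturbed))
    return importance

# A toy black-box model of a trial's "success probability".
def predict(t):
    return 0.4 * t["prior_phase_success"] + 0.1 * t["site_count_norm"]

trial = {"prior_phase_success": 1.0, "site_count_norm": 0.5}
scores = explain(predict, trial)
# scores shows prior_phase_success moves the prediction far more
# than site_count_norm, i.e. it "explains" more of the output.
```

Production explainability tools work along similar lines at much greater sophistication, attributing a prediction back to the variables that drove it.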
Baking in bias
Behind the wide-eyed futurism lies a large problem: bias.
All AI systems bake in bias. Those fed on a diet of clinical trial data – most of which comes from the US, where all trials are required to report results whether successful or not – will be biased towards the participants in those trials.
Until now, those participants have been overwhelmingly white men, although this is beginning to change.
There’s also a risk that an AI predictor may narrow the range of drugs or devices put through clinical trials to only those with a high probability of success. For example, companies and researchers have been working on Alzheimer’s for decades, and so far all but four treatments have failed.
The experts say a predictive model could entrench those problems, or fix them.
Tam Nguyen, deputy director of research at St Vincent’s Hospital, Melbourne, says new AI-based tools allowing for broader-based recruitment, as well as policy changes that force trials to include specific genders and ethnicities, will add to the data an AI predictor will have to work with.
“They need to get data from the local community… because otherwise it doesn’t make sense,” Nguyen says. “Eye disease in Australia would be different to eye disease in India, for example, and AI needs to allow for that lack of relevance to the local patient group.
“Even gender, we know that trials are being conducted on middle-aged white men and there’s ample evidence out there for AI, if used correctly, to correct that imbalance.”
Nguyen is optimistic that a predictive system won’t divert resources from high-risk diseases and drugs, saying it won’t alter the priority list in important areas like age-related disease. Instead, it should allow a trial to start faster and patients to be involved sooner – critical for fast-moving conditions like ovarian cancer or glioblastoma – and reprioritise where money is spent.
It could even break research fixations: for three decades Alzheimer’s research was focused purely on a single theoretical cause – beta-amyloid accumulations in the brain – yet every proposed therapy has failed in clinical trials. Only in the last few years has non-beta-amyloid research been able to attract funding.
“Potentially predictive software helps funders or venture capital [to] use some of this data to help them do part of their due diligence,” Nguyen says. “Why would you fund something that, based on the current evidence, is not working?”
COVID-19 proved that clinical trials don’t need to take years, while deep learning and explainable AI systems suggest they could be cheaper and better run too – all of which might add up to improved healthcare for all in the future.
Rachel Williamson is a business and science journalist based in Melbourne.