Algorithm helps detect brain cancer during surgery
Researchers believe model could provide reliable, real-time information.
By Nick Carne
Medical researchers have developed an algorithm they say performs as well as human pathologists in classifying surgical samples from the 10 most common types of brain cancer.
Writing in the journal Nature Medicine, a team led by Daniel Orringer from New York University School of Medicine suggests its new technique could be used to provide expert-level diagnostic information during surgery in near real-time.
The researchers combined an AI model with a laser-based optical imaging technique known as stimulated Raman histology (SRH) to diagnose brain cancer in under 150 seconds.
SRH reveals tumour infiltration in human tissue by collecting scattered laser light, illuminating essential features not typically seen in standard histologic images.
The microscopic images are then processed and analysed with AI, giving surgeons a predicted brain tumour diagnosis in the operating room. During resection, the same technology can be used to detect and remove residual tumour that would otherwise go undetected.
"As surgeons, we're limited to acting on what we can see; this technology allows us to see what would otherwise be invisible, to improve speed and accuracy in the OR, and reduce the risk of misdiagnosis," Orringer says.
In a clinical trial involving 278 brain tumour patients at three hospitals, the AI-based diagnosis was 94.6% accurate, compared with 93.9% for pathologist-based interpretation.
Just as importantly, the model is always available, whereas pathologists able to provide a diagnosis during surgery are often in short supply.
To build their model, Orringer and colleagues trained a deep convolutional neural network (CNN) with more than 2.5 million samples from 415 patients to classify tissue into 13 histologic categories that represent the most common brain tumours.
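The paper's actual architecture and category list aren't detailed here, but the general idea of such a workflow, classifying many image patches from one specimen and aggregating the results into a single diagnosis, can be sketched as follows. This is a minimal illustration with placeholder class labels and a random stand-in for the trained CNN, not the authors' implementation:

```python
import math
import random

# 13 histologic categories (placeholder labels; the real list
# comes from the study and is not reproduced here)
CLASSES = [f"class_{i}" for i in range(13)]

random.seed(0)

def cnn_predict(patch):
    """Stand-in for the trained CNN: returns a probability
    distribution over the 13 categories for one image patch."""
    logits = [random.gauss(0, 1) for _ in CLASSES]
    logits[3] += 3.0  # bias one class so the demo output is stable
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # softmax
    s = sum(exps)
    return [e / s for e in exps]

def diagnose(patches):
    """Aggregate patch-level probabilities into a specimen-level
    diagnosis by averaging, then taking the most probable class."""
    probs = [cnn_predict(p) for p in patches]
    avg = [sum(col) / len(probs) for col in zip(*probs)]
    best = max(range(len(avg)), key=avg.__getitem__)
    return CLASSES[best], avg

label, avg = diagnose([None] * 50)  # 50 dummy patches from one specimen
print(label)
```

Averaging over many patches is one common way to make a whole-specimen call robust to a few ambiguous regions; the specific aggregation rule the authors used may differ.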
To test it, they randomly assigned specimens from their patients to a control arm (the current standard practice) or an experimental arm.
The control arm involved transporting the specimen to a pathology laboratory, where processing, slide preparation by technicians and interpretation by pathologists took 20 to 30 minutes. The experimental arm was performed entirely intraoperatively, from image acquisition and processing through to diagnostic prediction by the CNN.
Notably, the researchers say, diagnostic errors in the experimental group were distinct from those in the control group, suggesting that a pathologist using the novel technique could achieve close to 100% accuracy.
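The logic behind that claim is simple: if the two methods never fail on the same specimen, combining them leaves no case that both miss. A toy calculation with invented numbers (not the trial data) makes this concrete:

```python
# Hypothetical example: 1000 specimens, with made-up IDs for the
# cases each method misclassifies. The key property is that the
# two error sets do not overlap.
n = 1000
ai_errors = {4, 17, 250}      # specimens the AI gets wrong
path_errors = {88, 402, 731}  # specimens the pathologist gets wrong

both_wrong = ai_errors & path_errors  # cases neither method catches
combined_accuracy = 1 - len(both_wrong) / n

print(combined_accuracy)  # 1.0 when the error sets are disjoint
```

In practice a pathologist reviewing the AI's prediction would still need to recognise and override its mistakes, so disjoint error sets set a ceiling rather than a guarantee.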
And the potential of this approach, they suggest, is huge.
“Although our workflow was developed and validated in the context of neurosurgical oncology, many histologic features used to diagnose brain tumours are found in the tumours of other organs,” they write.
“Consequently, we predict that a similar workflow incorporating optical histology and deep learning could apply to dermatology, head and neck surgery, breast surgery and gynaecology, where intraoperative histology is equally central to clinical care.
“Importantly, our AI-based workflow provides unparalleled access to microscopic tissue diagnosis at the bedside during surgery, facilitating detection of residual tumour, reducing the risk of removing histologically normal tissue adjacent to a lesion, enabling the study of regional histologic and molecular heterogeneity, and minimising the chance of nondiagnostic biopsy or misdiagnosis due to sampling error.”