AI beats docs in cancer spotting


A new study provides a fresh example of machine learning as an important diagnostic tool. Paul Biegler reports.


A doctor searches for signs of breast cancer – new research suggests that AI might be able to do the job faster, and more accurately.
Getty Images

Artificial intelligence (AI) has outperformed doctors at detecting breast cancer in a new study that will further jangle the nerves of medicos, already skittish in the face of a technology whose march into medicine seems unstoppable.

The study, led by Babak Ehteshami Bejnordi at Radboud University Medical Centre in the Netherlands, reported the results of the Cancer Metastases in Lymph Nodes Challenge (also known as CAMELYON16), a competition that ran for the 12 months to November 2016.

CAMELYON16 threw down the gauntlet to researchers, who had to come up with an automated way of detecting cancer cells in lymph node biopsies from women with breast cancer.

During surgery, doctors inject a radioactive tracer and blue dye into breast tissue near the tumour; the lymphatic system funnels both to lymph nodes in the armpit.

Doctors can then scan the lymph nodes with a Geiger counter, and check them by eye, to find the “hot”, blue-coloured node, also called the sentinel node: the one the cancer will spread to first.

In what is a critical procedure for the woman concerned, the node is removed, sectioned, stained, and examined by pathologists under a microscope.

A clear node is, of course, good news. But cancer cells in the sentinel node indicate spread, and can mean radiotherapy or further surgery, such as a mastectomy or removal of the underarm lymph nodes, which can leave women with a swollen and sometimes useless arm.

It's an area where pathologists have room for improvement.

A 2012 study found diagnoses in nearly a quarter of biopsies were altered after review by specialist pathologists, with most upgraded to a more serious category.

In the current study, published in the Journal of the American Medical Association (JAMA), machine learning algorithms were pitted against 11 pathologists to analyse 129 sentinel node biopsies, 49 of which had cancer cells and 80 of which were clear.

The pathologists brought years of dedication and experience to the task. The algorithms, on the other hand, were trained on just 270 digitally scanned lymph node sections, 110 of which contained malignant cells, meticulously labelled by pathologists to show the machines where the cancer was.
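
How did the machines learn? The competition entries were deep convolutional neural networks trained on small image patches cut from those labelled slides. As a rough, illustrative sketch of the idea (not any team's actual code), here is a toy patch classifier in PyTorch; the network architecture, patch size and random stand-in data are all hypothetical.

```python
# Illustrative sketch only: a tiny patch-level tumour/normal classifier in
# the spirit of the CAMELYON16 entries. Real entries used far deeper
# networks on patches cut from gigapixel slides; the model, patch size and
# random stand-in data here are hypothetical.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two classes: normal vs tumour

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for patches sampled from pathologist-annotated slide regions:
# 64 RGB patches of 96x96 pixels, each labelled 0 (normal) or 1 (tumour).
patches = torch.randn(64, 3, 96, 96)
labels = torch.randint(0, 2, (64,))

for step in range(5):  # a few gradient steps on the toy batch
    optimiser.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimiser.step()
    print(f"step {step}: loss {loss.item():.3f}")
```

The real entries worked at vastly larger scale, tiling gigapixel whole-slide images into huge numbers of patches and stitching the patch-level predictions back into a tumour probability map for each slide.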

The pathologists were given two hours to examine the slides, mimicking a real-life workload in the Netherlands. One further pathologist had no such time limit, and took a comparatively leisurely 30 hours to complete the task.

The time-pressured pathologists spotted, on average, just 31 of 49 cancers, while the time-rich pathologist found 46.
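
Put as a detection rate (sensitivity), the gap is stark. A quick back-of-the-envelope calculation (illustrative only; the study's formal comparisons used statistics such as the area under the ROC curve):

```python
# Back-of-the-envelope sensitivity from the article's figures:
# 49 biopsies contained cancer; the time-pressured pathologists found
# 31 on average, while the pathologist with no time limit found 46.
total_cancers = 49
readers = {"time-pressured panel (average)": 31, "no time limit": 46}

for label, found in readers.items():
    print(f"{label}: {found}/{total_cancers} = {found / total_cancers:.0%} sensitivity")
```

That works out to roughly 63% versus 94%.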

The top-performing algorithm, entered by a team from Harvard Medical School and the Massachusetts Institute of Technology, returned a score on a par with the single unhurried pathologist – whose 30-hour odyssey was, the authors note, “infeasible in clinical practice” – and significantly outperformed the time-poor doctors.

The stunning result comes in a year in which AI has insinuated itself ever more firmly into medical diagnosis.

In February, a group led by Stanford University’s Sebastian Thrun reported an algorithm that diagnosed cancer in pictures of skin lesions with a similar competence to board-certified dermatologists.

Then in April, research led by Stephen Weng at the University of Nottingham found AI did better than existing guidelines at predicting heart attack and stroke in general practice patients, a result that could lead to better targeted prevention.

In July, a study by Andrew Ng’s team at Stanford University reported a deep learning algorithm that detected heart arrhythmias, recorded on a single-lead ECG, more accurately than a panel of cardiologists.

Another study led by Ng, published in November, described an algorithm that outperformed radiologists in spotting pneumonia on frontal chest X-rays.

And this month, Tien Yin Wong, of the Singapore National Eye Centre, spearheaded research in which a deep learning system examined nearly half a million retinal photographs, and found diabetic eye disease at a rate comparable to professional human graders.

The zephyr of change is, undoubtedly, ruffling some medical feathers, but the profession is being urged to embrace AI as a partner in practice, and not to see it as a threat.

In an editorial accompanying the JAMA study, Jeffrey Golden of Harvard Medical School writes: “AI and other computational methods must be integrated into all of our training programs. Future generations of pathologists must be comfortable and facile using digital images and other data in combination with computer algorithms in their daily practice.”

The advice could well be echoed across other areas of medicine. But the establishment, it seems, has been dilatory in heeding the call.

Writing in the New England Journal of Medicine in September, Harvard Medical School doctors Ziad Obermeyer and Thomas Lee were blunt: “Undergraduate premedical requirements are absurdly outdated. Medical education does little to train doctors in the data science, statistics, or behavioural science required to develop, evaluate, and apply algorithms in clinical practice.”

It seems fair to conclude that Deans of Medicine, and others responsible for medical curricula, should listen up.

Paul Biegler is a philosopher, physician and Adjunct Research Fellow in Bioethics at Monash University. He received the 2012 Australasian Association of Philosophy Media Prize and his book The Ethical Treatment of Depression (MIT Press 2011) won the Australian Museum Eureka Prize for Research in Ethics.