Researchers warn medical AI is vulnerable to attack

The risk of “medical error” takes on a new and more worrying meaning when the errors aren’t human, but the motives are.

In an article published in the journal Science, US researchers highlight the increasing potential for adversarial attacks to be made on medical machine-learning systems in an attempt to influence or manipulate them.

Due to the nature of these systems, and their unique vulnerabilities, small but carefully designed changes in how inputs are presented can completely alter their output, subverting otherwise reliable algorithms, the authors say.

And they present a stark example: their own success in using “adversarial noise” to coax algorithms into diagnosing benign moles as malignant with 100% confidence.

The Boston-based team, which was led by Samuel Finlayson from Harvard Medical School, brought together specialists in health, law and technology.

In their article, the authors note that adversarial manipulations can come in the form of imperceptibly small perturbations to input data, such as making “a human-invisible change” to every pixel in an image. 
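To illustrate how such a perturbation can be generated, here is a minimal sketch using the fast gradient sign method, a standard technique from the adversarial-examples literature rather than the authors’ own code; the tiny untrained network, image size and epsilon value are illustrative assumptions only.

```python
# Sketch of the fast gradient sign method (FGSM): a tiny, per-pixel change that
# pushes a classifier's prediction in the wrong direction. The model and data
# below are illustrative stand-ins, not the system described in the Science paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "classifier": a small untrained network over 64x64 RGB images with two
# classes (e.g. benign vs malignant). A real attack would target a trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 64, 64)   # hypothetical input image, pixel values in [0, 1]
true_label = torch.tensor([0])     # assume class 0 means "benign"

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM step: nudge every pixel by a tiny amount (epsilon) in the direction that
# increases the loss, then clamp back to the valid pixel range.
epsilon = 0.01                     # small enough to be visually imperceptible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("clean prediction:      ", model(image).softmax(dim=1))
    print("adversarial prediction:", model(adversarial).softmax(dim=1))
```

Against a trained diagnostic model, the same gradient step, tuned or repeated, is what allows an attacker to shift a prediction while leaving the image essentially unchanged to a human reviewer.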

“Researchers have demonstrated the existence of adversarial examples for essentially every type of machine-learning model ever studied and across a wide range of data types, including images, audio, text, and other inputs,” they write.

To date, they say, no adversarial attacks have been documented in the healthcare sector. The potential exists, however, particularly in medical billing and insurance, where machine learning is already well established and adversarial attacks could be used to generate false medical claims and enable other fraudulent behaviour.

To address these emerging concerns, they call for an interdisciplinary approach to machine-learning and artificial intelligence policymaking, which should include the active engagement of medical, technical, legal and ethical experts throughout the healthcare community.

“Adversarial attacks constitute one of many possible failure modes for medical machine-learning systems, all of which represent essential considerations for the developers and users of models alike,” they write. 

“From the perspective of policy, however, adversarial attacks represent an intriguing new challenge, because they afford users of an algorithm the ability to influence its behaviour in subtle, impactful, and sometimes ethically ambiguous ways.”
