Neurotech may destroy your privacy and rights

At the 2016 Code Conference, held in June in Rancho Palos Verdes, California, US, Tesla and SpaceX founder Elon Musk made a stuttering, almost offhand pronouncement that proceeded to reverberate, far more emphatically, across global media outlets.

Musk predicted that, should artificial intelligence (AI) continue its current trajectory, humans would be left so far behind that we would become like pets – “house cats” – to our AI overlords.

Following up in May this year, the tech billionaire launched Neuralink, a company whose mission is to develop a brain-computer interface (BCI) or “neural lace” that would allow humans to communicate with computers at the speed of thought, augmenting our intelligence to the point where we could keep pace with any rampant AI.

There is, however, another threat for which we may be equally ill-prepared, according to a commentary by a group of neuroscientists, clinicians and ethicists, known as the Morningside group, published in the journal Nature this week. That threat comes from the brain devices themselves.

The group argues that august codes of ethics such as the Declaration of Helsinki, the Belmont Report, and even the Asilomar AI Principles, ratified in January by nearly 4000 signatories including Stephen Hawking and Musk himself, are no longer fit for purpose in a world where the capabilities, and risks, of BCI are advancing almost weekly.

Implants that can record neural activity and stimulate the brain are becoming more common in the treatment of epilepsy and Parkinson’s disease. Similar devices allow paralysed people to move a computer cursor, type, and use a robotic arm, all with just their thoughts. And algorithms are pushing inexorably closer to reading thoughts, already discerning words people are merely imagining saying.

The upshot, the authors point out, is that privacy concerns, already front and centre with data-hungry apps, will multiply should organisations gain direct access to neural data and, potentially, be able to manipulate your “mental experience”. 

Imagine proactive insurers hiking your premiums when they learn you intend to go hang gliding, or advertisers switching on your reward regions when they detect that you’re looking at their products.

The fix, argue the authors, is to make opting out of neural data collection the default. Legislation, they add, must strictly regulate the commercial transfer and use of neural data.

The group has other concerns that strike even deeper at the core of our humanity. Consider, they write, a person with a BCI-controlled robotic arm who gets frustrated, crushes a cup and injures an assistant in the process. Is the person, or the device, to blame?

That issue is so significant, the authors believe, that we need a new set of rights – “neurorights” – to ensure that brain technologies don’t blur our sense of identity, or our agency – the understanding of which actions are under our direct control.

They also worry that the armed forces might use BCI to create super-intelligent soldiers and set off an “augmentation arms race”, an eventuality that ought, they argue, to be headed off by a United Nations-led moratorium.

An equally insidious concern was brought to light in 2016, when an investigation found that an algorithm used by US law enforcement falsely inflated the risk of recidivism among black defendants. The group warns that such algorithmic biases, often reflecting those of the developers themselves, require countermeasures that must become “the norm for machine learning”.

The authors concede they face a formidable foe: “history indicates that profit hunting will often trump social responsibility in the corporate world,” they write.

But there is one field, they note, that already charts a course through the oft-conflicted waters of profit and ethical practice, and might offer a template for change: the profession of medicine.

In future, just maybe, the boardrooms of Silicon Valley might echo with AI’s very own version of the Hippocratic Oath.
