Imagine a condition that leaves you fully conscious but unable to move or communicate, as some victims of severe strokes or other neurological damage experience. This is locked-in syndrome, in which the outward connections from the brain to the rest of the world are severed. Technology is beginning to promise ways of remaking these connections, but is it our ingenuity or the brain’s that is making it happen?
Ever since an 18th-century biologist called Luigi Galvani made a dead frog’s leg twitch, we have known that there is a connection between electricity and the operation of the nervous system. We now know that the signals in neurons in the brain are propagated as pulses of electrical potential, whose effects can be detected by electrodes in close proximity. So, in principle, we should be able to build an outward neural interface system – that is to say, a device that turns thought into action.
In fact, we already have the first outward neural interface system to be tested in humans. It is called BrainGate, and it consists of an array of micro-electrodes implanted into the motor cortex, the part of the brain concerned with controlling arm movements. Signals from the micro-electrodes are decoded and used to control the movement of a cursor on a screen, or the motion of a robotic arm.
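In outline, the decoding step is a learned mapping from patterns of electrode activity to intended movement. The sketch below is purely illustrative: the array size matches BrainGate’s 100 electrodes, but the simulated data and the simple least-squares decoder are assumptions for demonstration; real systems use more sophisticated decoders, such as Kalman filters.

```python
import numpy as np

# Hypothetical illustration: decode 2D cursor velocity from the firing
# rates of a 100-electrode array using a linear model fitted by least
# squares. Real systems such as BrainGate use more refined decoders;
# all data here are simulated.

rng = np.random.default_rng(0)

n_electrodes = 100   # BrainGate uses a 10 x 10 electrode array
n_samples = 500      # calibration samples (firing rates vs. known movement)

# Simulated calibration data: firing rates, and the cursor velocities
# the patient was asked to imagine at each time step.
true_weights = rng.normal(size=(n_electrodes, 2))
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_electrodes)).astype(float)
velocities = firing_rates @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Fit the decoder: least-squares weights mapping rates -> (vx, vy).
weights, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

# At run time, each new vector of firing rates yields a cursor velocity.
new_rates = rng.poisson(lam=5.0, size=n_electrodes).astype(float)
vx, vy = new_rates @ weights
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")
```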
A crucial feature of these systems is the need for some kind of feedback. A patient must be able to see the effect of their willed patterns of thought on the movement of the cursor. What’s remarkable is the ability of the brain to adapt to these artificial systems, learning to control them better.
Virtual reality
Inward neural interfaces – ones that provide inputs to the brain – also depend on the brain’s ability to adapt to them. Cochlear implants, which can restore some measure of hearing to the profoundly deaf, have been around for several decades now. These take signals from an external microphone and, after signal processing, transmit a series of pulses to electrodes that excite the auditory nerve. The pulses are designed to mimic the way different frequencies are encoded by a functioning cochlea, but the match is imperfect, and the restoration of the ability to understand speech, for example, depends on the brain’s impressive ability to learn to adapt to the new kinds of input.
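Broadly, that signal processing resembles a bank of band-pass filters whose slowly varying envelopes set the pulse strength on each electrode. The sketch below is a loose illustration of the idea, not any manufacturer’s actual algorithm; the band edges, filter order and smoothing window are all assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical sketch of cochlear-implant-style processing: split the
# microphone signal into frequency bands, extract each band's slowly
# varying envelope, and use the envelopes to set pulse amplitudes on
# the corresponding electrodes. Band count and edges are illustrative.

fs = 16_000                                  # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)          # stand-in for microphone input

band_edges = [(300, 700), (700, 1500), (1500, 3000), (3000, 6000)]

envelopes = []
for low, high in band_edges:
    # Band-pass this channel, as a functioning cochlea separates
    # frequencies along its length.
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    band = filtfilt(b, a, audio)
    # Crude envelope: rectify, then smooth with a 10 ms moving average.
    rectified = np.abs(band)
    window = int(fs * 0.01)
    envelopes.append(np.convolve(rectified, np.ones(window) / window, mode="same"))

# Each envelope would modulate the pulse train sent to one electrode.
for (low, high), env in zip(band_edges, envelopes):
    print(f"{low}-{high} Hz electrode drive (mean): {env.mean():.4f}")
```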
The first trials of retinal implants have now taken place, in which signals from a camera are used to stimulate retinal neurons in vision-impaired patients. Second Sight’s Argus II system shows some encouraging results, with patients able to pick out shapes and detect the motion of objects. For the first time, people who have become blind through the degeneration of their photoreceptor cells – the cells that convert light into electrical signals in the eye – can have some measure of artificial vision restored.
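At its simplest, the mapping from camera to electrodes is a drastic down-sampling: the Argus II array has just 60 electrodes. Below is a rough illustration, assuming a 6 × 10 electrode grid and treating each region’s average brightness as a stimulation level; real devices apply considerably more image processing.

```python
import numpy as np

# Hypothetical sketch of the camera-to-electrode mapping in a retinal
# prosthesis: reduce a camera frame to the resolution of the electrode
# array, with each region's average brightness setting the drive level
# of one electrode. The 6 x 10 grid is an assumption for illustration.

frame = np.random.default_rng(1).random((120, 200))   # stand-in camera frame

rows, cols = 6, 10                                    # electrode grid
block_h = frame.shape[0] // rows                      # 20-pixel blocks
block_w = frame.shape[1] // cols

# Average brightness over each block -> one stimulation amplitude.
stimulation = frame.reshape(rows, block_h, cols, block_w).mean(axis=(1, 3))

print(stimulation.round(2))   # 6 x 10 grid of electrode drive levels
```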
The key message of all this is that brain interfaces are now a reality, and that the current versions will undoubtedly be improved. For many deaf and blind people, and for people with severe disabilities – including, perhaps, locked-in syndrome – there are very real prospects that some of their lost capabilities might be at least partially restored in the near future.
Until then, though, our neural interface systems remain very crude. One problem is size: the micro-electrodes in use now, with diameters of tens of microns, may seem tiny, but they are still coarse compared with the sub-micron dimensions of individual nerve fibres. Another is scale: the BrainGate system, for example, consists of just 100 micro-electrodes in a square array; compare that with the many tens of billions of neurons in the brain. The fact that these devices work at all is perhaps more a testament to the adaptability of the human brain than to our technological prowess.
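A back-of-envelope calculation makes the mismatch vivid (taking the commonly quoted estimate of roughly 86 billion neurons in the human brain):

```python
# Rough scale comparison: electrodes in a BrainGate-style array versus
# an estimated 86 billion neurons in the human brain (the neuron count
# is an approximation, often quoted in the literature).
electrodes = 100
neurons = 86e9
print(f"neurons per electrode: {neurons / electrodes:,.0f}")  # ~860,000,000
```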
Scale models
So the challenge is to build neural interfaces on scales that better match the structures of biology. Here, we move into the world of nanotechnology. There has been much work in the laboratory to make nano-electronic structures small enough to read out the activity of a single neuron. In the 1990s, Peter Fromherz, at the Max Planck Institute for Biochemistry, pioneered the use of silicon field-effect transistors, similar to those used in commercial microprocessors, to interact with cultured neurons. In 2006, Charles Lieber’s group at Harvard succeeded in using transistors made from single carbon nanotubes – whiskers of carbon just one nanometre in diameter – to measure the propagation of single nerve pulses along nerve fibres.
But these successes have been achieved not in whole organisms, but in nerve cells cultured on flat substrates, typically something like the surface of a silicon wafer. It is going to be a challenge to extend these methods into three dimensions, to interface with a living brain. Perhaps the most promising direction will be to create a 3D “scaffold” incorporating nano-electronics, and then to persuade growing nerve cells to infiltrate it, creating what would in effect be cyborg tissue – living cells and inorganic electronics intimately mixed.
This prospect might be achievable in our lifetimes, but the transhumanist dream of obtaining a complete readout of the brain – a transcript of the state of the mind – remains very far away. Neural interfaces will remain only the narrowest and most partial of windows on the huge complexity of a brain’s inner life, though even that partial window will be life-transforming for some.
As brain interfaces improve, they will bring real benefits to many, and some ethical issues too. As the techniques become more routine, it’s likely that people will find non-medical uses for them. We might find ourselves controlling computer games, or taking direct control of machines at work. We will still be a long way from the seamless integration of humans and machines, but the science fiction vision of the cyborg will become real enough to give us pause for thought.
This piece is co-published with the World Economic Forum as part of its Final Frontier series. You can read more here.
Richard Jones, Professor of Physics, University of Sheffield
This article was originally published on The Conversation and republished here with permission. Read the original article.