In August 2013 my first cover of Cosmos magazine – Decode your brain – suggested we were on the verge of a new era of understanding how the brain works. So, as we approach the finish line on yet another “decade of the brain” – one in which multi-billion-dollar campaigns have sought to reveal the workings of the most complex thing in the universe – it seems a good time to ask: are we there yet?
Previous “decades of the brain” had promised a similar neural booty, such as new treatments for schizophrenia or a spur to the development of AI.
But this campaign was on steroids. For one thing, it was a twin effort.
In early 2013, Europe launched its flagship Human Brain Project, the goal of which was to simulate a human brain inside a computer within a decade. Spoiler alert: it didn’t happen.
The US countered with President Obama’s BRAIN Initiative, tasked with more pragmatic aims, such as developing circuit diagrams of the brain.
Overall, neuroscientists were armed with techniques that even two decades earlier had been the stuff of sci-fi.
Think microscopes that peer into the brain of a living animal to record signals from individual brain cells or neurons. Add the ability to switch particular brain circuits on or off with pinpoint precision using a pulse of light, a technique called optogenetics. Mix in brain atlases with the dynamic resolution of Google Maps – from full relief technicolour maps of the wrinkled brain surface down to cellular structures. Add circuit diagrams dubbed ‘connectomes’ to satisfy the most exacting electrical engineer. Just as geneticists needed the genome – the complete genetic code – to understand the logic of life, so too the connectome would underpin the logic of brain function.
This amazing bag of tricks has enabled researchers to begin the task of linking the electrical signals in brain circuits to such elusive things as behaviour. “The last decade has seen us move towards closing the explanatory gap,” says neuroscientist Professor John Bekkers at the Australian National University, who has spent his career analysing brain circuitry.
So: do we finally understand the brain?
It’s a meme that the brain tends to be understood in terms of the most advanced technology of the day.
For imperial-era Romans, the brain was an aqueduct. The 17th century natural philosopher René Descartes saw it as a hydraulic machine like the ones that moved statues in Versailles. Among late 19th century folk it was comparable to a telephone exchange. The 20th century finally nailed it: the brain is a computer. It takes in information, stores and processes it and delivers an output; moreover, it does so by sending electrical signals through its circuits.
Of course it doesn’t have the architecture of 20th century computers. It’s more akin to what the 21st has delivered – the neural networks that recognise faces in smartphones, and have now given us chatbots that pass the Turing test. It’s no surprise that brain and bots share similarities in their architecture – these machines were modelled on our brain architecture in the first place.
Still, artificial neural networks are a crude facsimile of a human brain, whose 80 billion neurons and 100 trillion connections give rise to our perceptions, intelligence, emotions, and consciousness. “It’s the complexity of scale”, notes Professor Gerry Rubin, director of the Janelia Research Campus in Virginia, where the imaging techniques to spy on individual brain cells were developed and the fruit fly connectome project began.
Moreover, he says the brain computer may not be logical. “It was built by evolution; not an engineer. The analogy I like is you went from the Ford model T into a Maserati without ever being able to turn the engine off.”
Brains are also famously more efficient at learning than artificial neural networks are – something that Dr Brett Kagan and his colleagues at Cortical Labs in Melbourne are exploring in DishBrain – human brain cells that can learn to play Pong.
So yes, the brain is fantastically complex and different from the computer on your desk. But if the goal is to decode brain signals, we’ve made some fantastical strides – at least, if the last decade of headlines is anything to go by.
Neuroscientist Professor Jack Gallant at the University of California, Berkeley, can tell what movie a person is watching by decoding their brain activity in an fMRI machine. Dr Joseph Makin at the University of California, San Francisco, inserted electrodes into the brains of epileptic patients undergoing diagnostic tests and converted their thoughts to text. Professor Doris Tsao, also at UC Berkeley, can identify the face a monkey is looking at by reading signals directly from wires connected to 205 brain cells.
These examples seem to proclaim loud and clear: we are learning how to crack brain codes.
Yet according to neuroscientist Professor Karel Svoboda, at the Allen Institute in Seattle, these are “parlour tricks”.
Tsao agrees: “What we know about the brain, including my own work, is trivial.”
No doubt, these brain scientists are reprising the age-old truism articulated by explorers since Aristotle: the more you know, the more you know you don’t know. As Tsao admits, “whatever you already understand is trivial and boring”.
Nevertheless, as neuroscientists peel back the curtain on old mysteries, new vistas open up – both down into the molecular details and up into the vast cloud of emergent properties.
For Svoboda, the direction of interest points deeper into the underlying circuitry.
When he refers to mind-reading breakthroughs as “parlour tricks”, he means they are relying on correlation. Researchers like Gallant and Makin record brain signals when a person is watching a movie or articulating words, then feed them into a machine-learning algorithm that learns to associate the patterns, in much the same way these algorithms learn to detect cats on your phone.
For Svoboda – who aims for nothing less than reverse engineering the circuitry of the brain – that’s not very informative. He and his colleagues are making headway in reverse engineering the mystery of short-term memory: the type that allows you to remember 10 digits long enough to tap them into your phone or follow instructions to turn right or left at the next street.
The mystery of long-term memory – the kind needed to remember that phone number permanently – was at least partially revealed in the 1960s. When neurons fire together in synchrony – say five times a second for several seconds – the connections between them strengthen, soldering them into a circuit. That soldering was dubbed “long-term potentiation”, and its role in memory was demonstrated directly with optogenetics in 2014.
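The “fire together, wire together” soldering can be sketched in a few lines of code. This is a toy Hebbian learning rule, not a model of any real circuit: the neurons, firing rates and learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fire together, wire together" rule: neurons that spike in synchrony
# have the weight between them strengthened -- the "soldering" described
# above as long-term potentiation.
n_neurons = 5
weights = np.zeros((n_neurons, n_neurons))
learning_rate = 0.1

# Fake spike trains: neurons 0 and 1 always fire together; the rest fire
# independently and only rarely coincide.
for _ in range(100):
    spikes = rng.random(n_neurons) < 0.1
    spikes[0] = spikes[1] = rng.random() < 0.5  # the synchronous pair
    # Hebbian update: strengthen the connection for every co-active pair.
    weights += learning_rate * np.outer(spikes, spikes)
np.fill_diagonal(weights, 0.0)  # no self-connections

# The synchronously firing pair ends up far more strongly wired together
# than any pair of independently firing neurons.
print(weights[0, 1], weights[2, 3])
```

After a hundred rounds of activity, the weight between the synchronous pair dwarfs the weights between neurons that only fired together by chance – a cartoon of how synchrony selects which connections get soldered.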
Short-term memory has remained more elusive. One long-standing theory proposed by physicist Professor John Hopfield in the 1980s suggested it involved reverberating signals between a set of neurons – as if they were humming a tune.
Svoboda has now provided evidence that this actually takes place. A recent experiment in his lab conducted by Kayvon Daie found that a circuit of some 50 neurons in the anterior lateral motor cortex (a part of the brain known to make decisions about movement) held the memory of whether a mouse should lick left or right to get a drink of water. Daie trained the mice with a musical tone. A high pitch meant a drink of water lay to the right; a low pitch, to the left. He observed what was happening in the mouse’s brain via a tiny window in its skull that had a microscope attached to it, with a view of about 500 cells. Thanks to some nifty genetic engineering, every time a neuron fired a signal, it flashed a fluorescent light detected by the microscope. (Calcium floods into a neuron when it fires, and the neurons had been engineered to carry a fluorescent calcium sensor.) Just prior to each lick, some 50 neurons fired together for a few seconds with a particular frequency, like a recognisable hum. Could they be encoding the memory of licking right?
To check, Daie artificially stimulated those same neurons with a pulse of light. The mice licked right. Away from the mice, Daie retreats to his computer to test models that describe how these networks behave. One of the most promising is the “attractor model”, which theorises that these circuits establish pre-existing templates that help complete a memory. This may be linked to our minds’ strong tendency to complete patterns or see a face in a cloud.
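The flavour of the attractor model can be captured with the classic Hopfield network the physicist of that name proposed. This is a minimal sketch, not the model Daie actually fits: a single made-up pattern is stored as a “template”, and the network’s dynamics pull a corrupted cue back to it – pattern completion in action.

```python
import numpy as np

# A minimal Hopfield-style attractor network. The stored pattern is an
# invented "memory"; each unit is either active (+1) or silent (-1).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian storage: the outer product wires co-active units together.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a degraded cue with two units flipped.
cue = pattern.copy()
cue[0] *= -1
cue[3] *= -1

# Run the dynamics: each unit takes the sign of its weighted input.
# The state slides into the attractor -- the stored template.
state = cue
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))
```

Even though the cue is wrong in two places, the network “hums” its way back to the stored pattern – a cartoon of how a pre-existing template can complete a partial memory.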
Another stunning case of reverse engineering behaviour comes from fruit flies. Insects have legendary navigational abilities: bees unerringly navigate their way from the hive back to a food source and communicate its whereabouts to others via their waggle dance; foraging desert ants navigate across hundreds of metres of featureless landscape back to their nest.
Fruit flies aren’t quite in the same league but they have the basic navigational kit. Along with colleagues at Rockefeller University, Professor Larry Abbott – a physicist-turned-neuroscientist based at Columbia University – has decoded it. Fruit flies held in place by miniature harnesses roamed a virtual environment while a microscope attached to their heads recorded the activity of individual brain cells.
It turns out that to keep track of where they were, fruit flies carried out a mathematical calculation taught to high school students: vector addition. The ability to reverse engineer the circuitry of a navigating fly relied on having the entire wiring diagram, the connectome.
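That vector addition is simple enough to write down. In this sketch – with an invented path, not real fly data – each leg of the journey is a displacement vector; summing them gives the fly’s position, and its negative is the “home vector” pointing straight back to the start.

```python
import numpy as np

# Path integration by vector addition. Each step is (heading in radians,
# distance); the values here are made up for illustration.
steps = [(0.0, 2.0), (np.pi / 2, 3.0), (np.pi, 1.0)]

position = np.zeros(2)
for heading, distance in steps:
    # Convert each leg of the journey into an x,y displacement and add it.
    position += distance * np.array([np.cos(heading), np.sin(heading)])

# The negative of the running total points straight back to the start.
home_vector = -position
print(np.round(home_vector, 3))
```

However winding the outbound path, the running sum always yields a single straight-line vector home – which is what makes the trick so useful to a foraging insect.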
“In flies we can now test theories with the precision I was used to in physics,” enthuses Abbott.
Fly brains are a very long way from ours, but that doesn’t mean they won’t hold compelling lessons for decoding the human brain: evolution tends to re-use good inventions. A dramatic example is the eyeless gene, first discovered because it was crucial for the development of the fruit fly eye. It was subsequently found to be crucial to human eye development too.
For Professor Stephen Smith at the Allen Institute, the compelling questions are even more fine-grained. Smith has spent much of his career focussed on the synapse, the place where connections between neurons are strengthened or weakened. A single neuron, its branches sprawling like tentacles, is equipped with thousands of incoming synapses, each relaying a hopeful message from another neuron. Whether or not a neuron will accept that invitation to join part of a brain circuit is determined by what happens at the synapse. And that, believes Smith, is determined by the genes in play there. It turns out that neurons use more genes than any other type of cell. Hundreds of chemical messengers called neuropeptides are deployed at the synapse, different ones in each of the 4000 or so neuronal cell types that Smith has analysed. It is these neuropeptides, he believes, that determine which neurons will link into circuits, like those that underlie short-term memory.
Other researchers are champing at the bit to leap to higher dimensions.
Tsao’s work to date has reverse engineered how the monkey brain reads faces. She found it takes only 205 neurons in a region of the brain called the inferotemporal cortex to encode a face. The neurons are arranged in six face patches. Cells in each patch are tuned to a different facial feature. Some act like rulers to measure the distance between the eyes; others detect the face’s orientation – is it looking left or right? Yet others are tuned to the colour of the eyes or hair. In a process reminiscent of the way a detective assembles an identikit, it is the combined information delivered by these face patch cells that leads the monkey, or Tsao, to identify a specific face, regardless of its orientation. The coding logic seems to generalise to other functions of the IT cortex, such as identifying whether an object is animate (like a cat) or inanimate (like a box).
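The identikit idea can be caricatured in code. In this sketch – the faces, features and numbers are all invented, and real face cells number in the hundreds, not three – each model “cell” reports one measurement, and it is the combined vector of responses, not any single cell, that identifies the face.

```python
import numpy as np

# A toy "identikit" gallery: each face is a vector of feature measurements
# (eye spacing, orientation, hair tone). All values are invented.
gallery = {
    "face_A": np.array([3.1, 0.2, 0.8]),
    "face_B": np.array([2.4, 0.9, 0.1]),
    "face_C": np.array([3.0, 0.5, 0.5]),
}

def identify(responses: np.ndarray) -> str:
    # Pick the gallery face whose feature vector best matches the combined
    # population response (smallest Euclidean distance). No single feature
    # decides; it is the pattern across all "cells" that does.
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - responses))

# A noisy population response to face_A is still decoded correctly.
print(identify(np.array([3.05, 0.25, 0.75])))
```

Even with noise on every measurement, the combination pins down the right face – the population carries information that no individual detector does.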
“It’s a beautiful example of encoding, neuron by neuron,” says Abbott. “A few years back, I’d have said nobody’s ever going to get there.”
Tsao feels less triumphant. “I don’t think any principle we understand is interesting yet,” she says. “Everything we understand really well is still feed forward, wiring more and more complex feature detectors. I think there has to be something more to the brain than that.”
For Tsao the next goal is to figure out how the brain puts it all together – what she refers to as the “outer loop” of the brain computer. How, for instance, does the brain bind perceptions together in three-dimensional space to give us a model of the world? Tsao suspects that the posterior parietal cortex (a region lying roughly at the top of the primate brain towards the back) will hold clues.
Professor Ethan Scott at the University of Melbourne may beat her to it – in a juvenile zebrafish. In this transparent fish, he can watch its entire brain thinking – a symphony of 100,000 flashing neurons. He can also record the particular sections that perform when the fish listens to threatening sounds, while also sensing the flow of water and maintaining its balance. These signals all appear to be bound together in an area of its brain called the tectum – the equivalent of a mammal’s superior colliculus. Like many 21st century neuroscientists, Scott has his work cut out trying to decode the activity of thousands of neurons firing at once. “The problem used to be collecting data like these. Now we are flooded with data”. Progress, he says, will rely on “collaborations with theoreticians and mathematicians”.
Perhaps some of the outer loop properties like consciousness will always defy any attempt at reverse engineering.
Once a system gets to a certain level of complexity, it may be beyond the scope of reverse engineering. In a famous example, two neuroscientists tested whether their approach to reverse engineering brain circuits would allow them to explain the workings of a much simpler system – the MOS 6502 microprocessor running the game Donkey Kong. They couldn’t.
And no-one has any idea how deep-learning algorithms eventually arrive at their answers.
For Abbott that’s not surprising. Coming from physics, he’s comfortable with the idea that the ability to describe matter changes with scale. A hydrogen atom is completely describable by quantum mechanics, but that description can’t be used for a plank of wood. “In neuroscience I think we could understand what absolutely every neuron in the brain is doing and we still won’t have an understanding of something like consciousness,” he says.
For Tsao, “The path to understanding consciousness is going to come from AI. I think until we can experiment with it like the way we can with vision, we won’t understand it.” And with the current performance of AIs, she thinks the day they develop consciousness is close.
So do we understand the brain yet? Despite the breakthroughs of the last decade, most neuroscientists say we’re just at the very beginning. It’s as if we’ve discovered an alien computer. We’ve unpicked some of its hardware and are just learning to decode some of its simpler routines, but the mysterious outer loops loom before us, like a vast impenetrable cloud.
Whether we can ever penetrate that fog remains an open question.
The next decade – with its ever-accelerating dialogue between artificial and natural intelligences – will be the one to watch.