The mind gap

Understanding the human brain, and even consciousness, is a challenge vastly more complex than anyone had anticipated. Nevertheless, Alan Finkel argues that the goal may finally be within reach.

Inspired by the tangle of neurones in the brain, this abstract computer artwork represents the myriad branches extended by individual cells to send and receive electrical impulses. – Photo Library

How many movies have you seen that star an intelligent robot, one that speaks and acts with the wisdom of a human being? Perhaps a robot like Sonny in the 2004 movie I, Robot, which displayed emotions such as anger and fear. Sonny even had a sense of justice and claimed to have dreams.

We’ve seen these robots in film or read about them in books, and probably assumed that the development of such amazing machines would be just around the corner, maybe 10 or 15 years into the future. But in practice, making an intelligent robot, or for that matter an intelligent program, isn’t so easy.

Consider the simple task of reading numbers and letters. It is easy for us, but from a computer programming perspective it is so challenging that the distorted numbers and letters (see example) used as a security check to prevent the spamming of websites are enough to separate human from computer. That is, humans can read these characters and key them in; software programs cannot.

Let me give you another example: you call the phone company. A computer answers and says, “Please state the purpose of your call.” “Account inquiry,” you say, and the computer responds, “Did you say ‘Disconnect my telephone’?” We’ve all been there, and it’s frustrating! The basic expectations we have for even minimal interaction with a human being remain elusive to the world’s best computer scientists.

Or consider the following problem from a high-school maths class (see example). A computer would find the expected answer quickly. But could it ever offer the creative answer submitted by a smart-aleck student in an exam?

Consider the Honda robot named Asimo. It's not a movie robot: Asimo is real, and represents the state of the art in 2008 technology. But were it for sale, it would cost several million dollars! And despite being good-looking and able to walk and run, Asimo cannot think for itself, cannot hold a conversation, is incapable of riding a bicycle and will never learn to love.

There is something special about the human brain that computer scientists have not been able to reproduce despite their best efforts. For instance, in 1951 a brilliant American cognitive scientist, Marvin Minsky, built one of the first learning machines based on a revolutionary technology called ‘neural networks’. Each neural network had a primitive ability to learn. It was hoped that neural networks would mimic what was then thought to be the learning mechanism of the human brain, and would therefore lead to intelligent computers.
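Minsky's machine was built from valves and motors, but the underlying idea, a connection that strengthens or weakens with experience, can be hinted at in a few lines of modern code. The sketch below is an illustration in today's terms, not a reconstruction of the 1951 device: a single artificial neurone adjusts its connection weights whenever it makes a mistake, until it has learned a simple rule (here, logical AND).

```python
# A single artificial neurone with a primitive ability to learn.
# An illustrative modern sketch, not a reconstruction of Minsky's machine.
def train_neurone(examples, epochs=20):
    weights = [0, 0]
    bias = 0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # The neurone 'fires' if the weighted sum of its inputs exceeds zero
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            # Strengthen or weaken each connection according to the error
            weights[0] += error * x1
            weights[1] += error * x2
            bias += error
    return weights, bias

# Teach the neurone logical AND: fire only when both inputs are active
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_neurone(examples)

def respond(x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
```

After training, respond(1, 1) fires while the other input pairs do not. This is the perceptron learning rule, and later work (including Minsky's own) showed that such a single neurone can learn only the very simplest of patterns.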

The human brain is special

Nearly 60 years later, little progress has been made. What has become clear is that human brains are special, and cognitive scientists really didn’t understand what they were trying to emulate.

So what is so special about the human brain that sets it apart from machines, that gives us our higher-order mental faculties such as personality, intelligence, innovation and emotions? It’s a question that is not easy to answer.

Scientists have observed many correlations, such as noting that a particular region in the brain consumes oxygen at a much higher rate during mathematical calculations. However, we do not actually understand the mechanisms that underlie human intelligence and consciousness, certainly not at the level that would allow us to reproduce them or improve on them in a machine.

The channels in the brain are so efficient that despite millions of trillions of them, the total power used in the brain is just 20 watts: the power of a bedside lamp

Why should we care? Because if we can understand how intelligence and consciousness operate in a human brain, yes, we may be able to mimic them and make better machines. But we might also be able to use this knowledge to help overcome shortcomings when the brain is diseased or underdeveloped. While that might sound like an overly ambitious goal, we have examples today of that kind of achievement: like the bionic ear, a device that stimulates the auditory nerve directly and helps profoundly deaf people to hear.

There is a broad desire in the scientific and philosophical community to understand the mechanisms that lead to intelligent, conscious, mental activity in humans.

An engineer’s progress

I trained as an electrical engineer and, 30 years ago, began work as a research scientist trying to understand electrical activity in the brain. I then spent my working career designing equipment to help scientists conduct sophisticated research into the operation of brain cells. Working as both a research scientist and a design engineer gave me a broad perspective on the problems we face in understanding the human brain and in creating consciousness.

By ‘brain’ I of course mean the physical mass of grey jelly that occupies our craniums. The brain is tangible and it is measurable: it weighs 1.3 kg and contains 100 billion cells. The reason that the brain can contain so many cells is that each one is tiny, about one hundredth of a millimetre in diameter, much too small to be seen without a microscope. Each brain cell connects on average to 1,000 other brain cells – the brain therefore contains 100 trillion connections between cells. By measurements such as these we can define the brain, make it into a tangible entity amenable to the existing tools of scientific research and analysis.

Not so consciousness, which is quite different to the physical brain. It’s definitely not tangible, and has proven to be virtually impossible to quantify. We can describe consciousness in very general terms only, such as the processing of our vision and hearing into thought patterns that manifest themselves as happiness, ambition, memory, love, language and self-awareness. At its simplest level, consciousness can be described as mental activity.

Ancient wisdom

Great minds have sought to understand the human brain for well over 2,000 years. It is a remarkably complex organ. We now recognise it as being the most sophisticated, most complex part of our bodies, but to the ancient Egyptians, the rational soul resided in the heart, not the brain. The father of medicine, the Greek physician Hippocrates, was among the first credited with identifying the brain as the source of mental activity. “It ought to be generally known that the source of our pleasure, merriment, laughter and amusement, as of our grief, pain, anxiety and tears, is none other than the brain,” he wrote in the 5th century BC. A century later, the Greek philosopher Plato wrote, “[The brain] is the divinest part of us and lord over all the rest.”

Always thinking outside the box, Plato went much further than this simple statement. He followed a logical thread that led him to the belief that we must have a ‘soul’, and that human beings are a duality of body and soul (in modern terms, brain and consciousness). Plato argued that the human mind is governed by natural laws. A machine, too, is governed by the same natural laws, but never achieves consciousness. Therefore, there must be something that cannot be described by natural laws that results in the phenomenon of consciousness. This is the soul, so far beyond the rule of natural laws that it survives even the death of the body, he argued.

At first this sounds like a convincing argument. But it suffers from a terrible flaw. The flaw is that if, with the knowledge of the day, Plato could not identify those natural laws that apply to human beings but do not apply to machines, his conclusion should have been that he had to do more research. He should not have concluded that there is a supernatural explanation.

Another famous and brilliant ancient Greek, Plato’s student Aristotle, happily accepted Plato’s theory of the soul, but rejected his belief that the brain was responsible for mental activity. Thus Aristotle made a monumental error that influenced scientific belief for nearly two millennia.

Having seen from his dissections of animals that the brain was a soft, spongy mass generously supplied with blood vessels, Aristotle concluded that its role was merely to cool the blood. The American humourist Will Cuppy once quipped that this is true of some people only.

Perhaps Aristotle’s views were influenced by his knowledge of the technology of the day. When Aristotle saw the complex network of arteries and veins in the brain, in his imagination they were like the complex network of aqueducts and drains that coursed through Athens. Since the brain was for cooling the blood, Aristotle decided that thinking must take place in the heart. He wrote, “The brain is not responsible for any of the sensations at all. The correct view is that the seat and source of sensation is the region of the heart.”

Subsequent medical researchers went a long way towards correcting Aristotle’s errors. In the 2nd century AD the Roman physician Claudius Galen worked as a doctor to injured gladiators. Not surprisingly, he noticed that after a head injury their mental abilities were severely diminished. From these observations Galen concluded that mental activity occurred in the brain rather than the heart.

Nevertheless, because of his enormous authority, Aristotle’s views continued to influence medical thinking until the early 16th century. His views have also permanently influenced our colloquial thinking, leading William Shakespeare to write, “Go to your bosom: Knock there, and ask your heart what it doth know.”

The death knell for the notion that the heart was the seat of consciousness was sounded by the work of English doctor William Harvey in the early 17th century. Through animal dissections, Harvey demonstrated conclusively that the heart is a pump that sends blood around the body through the arteries and veins. To us today, this is as obvious as the rising of the sun, but to the philosophers of the time it was a shock. A heart that used all its mass working strenuously as a pump surely could not be the seat of mental activity, and so all attention shifted to the brain.

Ghosts in the machine

The Renaissance philosopher who had the biggest impact on the investigation of the brain and consciousness was René Descartes, a 17th century French philosopher, scientist and mathematician who pondered deeply the distinction between mind and body. His most famous conclusion on the nature of consciousness was, “I think, therefore I am.”

Descartes reasoned that the body, and within it the brain, works like a machine. Following in the footsteps of Plato, Descartes concluded that in contrast to the body, the mind, which could also be called the soul, is completely different to anything a machine could produce. Therefore, it must be a non-material entity that does not follow the laws of physics. Descartes’ theory is called dualism, and to this day continues to influence philosophical thinking. In Descartes’ dualism, the mind can survive the death of the body.

The key problem with dualism is that it provides no explanation of how the non-material mind interacts with the physical body. Descartes postulated, without providing any evidence whatsoever, that a small region of the brain known as the pineal gland acts as the intermediary between the brain and the mind. Unfortunately for Descartes’ theory, no subsequent evidence for the role of the pineal gland interfacing between the physical brain and a non-material mind has ever been found.

If Descartes’ theory of dualism were correct, an understanding of how the mind works would remain elusive forever, because we would have no tools other than our philosophical reasoning with which to measure and analyse such a non-material entity.

The life electric

By the mid-19th century, the first experiments in modern neuroscience began, and interest shifted from philosophical musings about the soul to experiments that involved something almost as magical – electricity.

The first hint that electricity played an important role in biology came in 1786 from the experiments of an Italian biologist, Luigi Galvani, after whom the galvanic effect and galvanisation in metals are named. Galvani was working on a dissected frog while an electrical storm raged outside.

While using his scalpel to dissect the dead frog, he touched a nerve and the frog’s leg muscle twitched. Galvani thought this was due to static charge transmitted from the lightning. However, when he repeated these experiments day after day he eventually realised that the lightning had nothing to do with it. Instead, he concluded that electricity was an innate force of life.

In 1791, Luigi Galvani’s experiments with frogs’ legs led him to believe electricity was intrinsically linked to life. – Photo Library

Galvani thought he had discovered the secret of life. Of course, nothing is that simple and for more than 100 years little progress was made in teasing out the relationship between electricity and biology; a quest that inspired Mary Shelley to write her classic, Frankenstein, in which electricity is used to bring life to a monster created from various corpses.

Then, in the last 100 years there has been an explosion in knowledge about how the cells of the brain use electricity and chemical signalling to communicate with each other and the rest of the cells in the body.

The modern slate

Possibly the most significant leap forward was by Spanish scientist Santiago Ramón y Cajal. Before Cajal’s work, physicians and scientists thought that structurally the brain was like a sponge: a continuous structure containing a network of canals.

Then, in a series of studies in the late 19th century, Cajal discovered that the brain was not a sponge at all, but was made up of a series of discrete cells that made it more like a bowl filled with rice grains than a sponge. To some scientists of the time, the brain cells and their connections reminded them of the most sophisticated technology of their day: a telephone network. Each cell was like a telephone that could place a call and send a message to any one of many neighbours. This is not too far removed from the popular concept we have for the brain today, again, based on the most sophisticated technology of our day: computers.

In neuroscience today, we know the brain cell has a distinctive body and many wires that send and receive internally generated electrical signals over long distances. Where these wires cross, they release chemicals that enable the brain cells to communicate with one another. The names of some of these chemicals, such as dopamine, are well-known because of their role in diseases of the brain such as Parkinson’s disease.

The region of the brain that is thought to be the seat of consciousness is the cortex: a large sheet of brain material about 3 mm thick. More or less located on the outer surfaces of the brain, the cortex is connected by huge numbers of ‘wires’ to the rest of the brain and body.

It's become clear that scientists really didn't understand what they were trying to emulate

In a human, the surface area of the cortex is more than 2,000 cm2. In a chimpanzee, it’s about 500 cm2, and in a rat about 5 cm2. These differences correlate well with our perception of the relative intelligence of each species.

Humans are biologically close to chimpanzees: 98 per cent of our genes are identical. Yet our mental capacities are vastly different. Yes, chimpanzees are clever: at the top of their list of skills is their ability to use a stick as a tool to extract ants from a hole. Clever as this is, it does not compare to those skills which have enabled humans to put men on the Moon. Ancient philosophers would have incorrectly explained this difference in mental capacity by our possession of a soul. Today, all the evidence points to the difference in mental capacity being explained by our possession of a bigger brain, in particular a bigger cortex.

The cells of the brain are organised into large subgroups responsible for one or several higher level functions. For example, there is a region of the brain known as the hippocampus, which plays a crucial role in the formation of new memories. In Alzheimer’s disease, it is one of the first regions of the brain to deteriorate.

Brain cells are fragile. Deprive the brain of oxygen for about 15 seconds and the subject becomes unconscious, with the possibility of permanent mental damage. At a more localised level, it is well-known and consistently proven that damage caused to specific regions in the brain will impair memory, speech or vision.

For example, after damage to a portion of the brain known as the parietal lobe, some patients suffer a visualisation defect such that they can point to a cigarette but not to the person smoking it.

There is a large body of experimental and clinical data to show which portions of the brain are responsible for various higher order functions. MRI scans have clearly shown which areas of the brain are activated by certain emotions or mental tasks.

Without doubt the human brain is the most complex object we know of in the universe. This picture is the best effort of an artist to represent the extraordinary complexity of a tiny, tiny piece of the brain.

Intriguingly, the brain is more complex than the instructions in our DNA that govern its development. This complexity is an example of an emergent property, a feature that evolves through the application of general rules, in this case the rules stored in the DNA. It is much like a brick wall. With a single sentence you can ask a bricklayer to build a wall 3 m high and 7 m long. Even though the bricklayer obeys your specific instructions, the actual pattern of the bricks that emerges depends on the bricklayer’s skill, mood, the dimensional quality of the bricks and the consistency of the mortar.

Somehow, from the wonderfully complex connections between the cells in the brain a new property emerges, a property called consciousness, that encompasses all the higher order functions of the mind, including intelligence, emotions, memory and creativity.

Cause and effect

To explain the workings of any mechanism, the best approach is to identify the cause behind every effect. If you were an alien explorer sent to observe the Earth, you would notice the skies dotted with aircraft. If your leaders back home asked you to explain how aeroplanes fly, it would not be sufficient to show them pictures of aeroplanes and explain that they fly because they have wings. Yes, there is a correlation between flight and the presence of wings; in fact wings are crucially important. But observing their presence does not actually provide an explanation of how wings lift the aircraft. A proper explanation requires an understanding of how molecules of air flow around the wings and interact with each other.

Returning to the brain, its incredible complexity makes it difficult to explain the details of its functioning, so scientists and philosophers often rely on observing and reporting correlations rather than providing a description of cause and effect.

Correlations between activity in various brain regions and mental responses such as love, fear and pain are fascinating, but in essence they are simply observations – like the observation that wings are necessary for aeroplanes to fly. These correlations are important observations, and indeed help us to diagnose and treat diseases, but we have to go much further if we are ever to understand the brain and consciousness. To understand how our brains work we need to learn how every single brain cell is connected to another, and how each brain cell operates internally and communicates with its neighbours.


In the world of engineering, the process of discovering how a complex piece of existing technology works by examining its structure in careful detail is called reverse-engineering. This was made famous in the early 1980s as a legal form of industrial espionage practised by large teams of engineers struggling to work out the internal circuit connections of personal computers. They literally examined all the internal connections under a microscope till they developed their own versions of the circuit diagrams that described the detailed operation of the PC.

To apply reverse-engineering to the human brain, instead of looking at MRI scans of the whole brain, the first step is to look at the detailed connections and activities of individual brain cells. This has been an area of intense study over the last 50 years.

Take a single brain cell, or neuron: it has long wires which overlap with each other and communicate at special connections, such as where two of the long wires might be touching each other (see diagram).

This is a computer graphic of a synapse – the connection between nerve cells. The flow of charged atoms triggered by this type of connection creates electricity in the brain. – Photo Library

A detailed look at this connection shows that it is tiny: if 10,000 of these were bunched together, the cluster would still be too small to see. The bulbous shape at the top is the part of a brain cell that wants to send a message to a second cell. The sheet running across the bottom is a cross section of the surface from the brain cell that will receive the signal. Chemicals released from the upper brain cell activate an electrical signal in the lower cell. Returning to the phone analogy, at this point we are looking at the part of the phone exchange that makes the connection from one phone to another.

The surface of the lower brain cell contains many specialised molecules, known as channels, which span the cell’s surface from the outside to the inside and can be either open or closed.

Electricity in the brain is the flow of electrically charged atoms. In the absence of a chemical messenger, the channels are closed and the charged atoms cannot get through. When a chemical messenger latches onto the channels, they open and charged atoms flow across the cell’s surface.

To understand the communication between brain cells it is necessary to understand how these channels work. But it’s not easy to study channels because they are so small. Each one is just a single molecule. This means the current that flows through each channel is minuscule, a tiny, tiny fraction of an amp, and the power consumed in each channel is a tiny, tiny fraction of a watt.

It is fortunate that the electrical power in each channel is small because we have millions of trillions of them in our brain, and if each of them used a lot of power, our brains would quickly burn up. The channels are so efficient that despite their huge numbers, the total power used in the brain is a mere 20 watts: about the power of a bedside reading lamp.
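That figure is easy to check. Taking "millions of trillions" to mean roughly 10^18 channels (an assumed reading: a million is 10^6 and a trillion 10^12), the power budget per channel works out as follows:

```python
# Rough power budget per ion channel, assuming "millions of trillions"
# means roughly 10**18 channels (an interpretation, not a measured count).
total_power_watts = 20          # the whole brain: about a bedside lamp
channel_count = 10**18
power_per_channel = total_power_watts / channel_count
print(power_per_channel)        # 2e-17 watts: about 20 attowatts per channel
```

Twenty billionths of a billionth of a watt per channel: small enough that even with astronomical numbers of channels, the brain stays cool.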

Scientists studying brain cells need large and complex amplifiers to measure the current in the channels of a single brain cell. These allow researchers, in effect, to eavesdrop on the ‘conversations’ between brain cells. In this way, scientists have learned a lot about individual brain cells, but to understand the functioning of the whole brain, we have to go a lot further.

Just like reverse-engineering a PC, to understand the human brain we need to create a wiring diagram, and a map of the types of connections between cells, for all the brain. Once we map every connection between the brain cells and reconstruct their behaviour in a computer simulation, we may be able, finally, to understand the emergence of consciousness.

Is it possible? The task is enormous. The South African zoologist Lyall Watson once said: “If the brain were so simple we could understand it, [then] we would be so simple [that] we couldn’t.”

The Blue Brain Project

To begin the second step in reverse-engineering the brain, building the map of all the connections, scientists start by dissecting out a small piece of tissue from a mouse brain. A cube of brain tissue less than a tenth of a millimetre on each side (much smaller than a grain of salt) is embedded in acrylic resin so that it becomes stiff and easy to handle. A machine then cuts the cube into ultra-thin slices, rather like a baker cutting thin slices of bread, and each slice is laid out in an array on a glass slide. The entire cube of brain tissue and its wiring can then be examined under a microscope.

Before the glass slide is placed under the microscope, a dozen or more coloured dyes are added to the brain tissue. Each coloured dye binds to a different chemical in the brain cells, helping to identify the type of cell and the type of connection between cells.

This image is a model of a tiny piece of a mouse brain (see details below). The small red points represent synapses, the points at which neurones connect. The green branches are neurones themselves, while the blue dots are brain cell nuclei. Only 1 per cent of actual neurones are shown. – Kristina Micheva and Stephen Smith, Stanford University

To use the bread analogy, imagine it was a multi-grain loaf and that each coloured dye binds to a unique grain such as rye, barley or wheat.

After the microscope takes a picture of every slice of brain tissue and converts that into a computer image, a stack of images is built: each slice in the stack is one of the slices from the original cube of brain tissue. Effectively, in the computer’s memory, the loaf of bread is put back together again, with the location of every grain of rye, barley and wheat in the loaf identified precisely.
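The reconstruction just described can be sketched in miniature. This toy example (illustrative only, not the project's actual software) stacks a few tiny 2D 'microscope images' into a 3D volume, then locates every stained point by its slice, row and column coordinates:

```python
# Toy sketch of rebuilding a 3D volume from 2D slice images.
def stained_voxels(volume):
    """Return the (slice, row, column) position of every stained point."""
    return [(z, y, x)
            for z, image in enumerate(volume)
            for y, row in enumerate(image)
            for x, value in enumerate(row)
            if value == 1]

# Three 2x2 'microscope images', in cutting order; 1 marks a dyed structure
slices = [
    [[0, 1],
     [0, 0]],
    [[0, 1],
     [1, 0]],
    [[0, 0],
     [1, 0]],
]

# Stacking the slices in order rebuilds the cube: volume[z][y][x]
volume = list(slices)
print(stained_voxels(volume))
# [(0, 0, 1), (1, 0, 1), (1, 1, 0), (2, 1, 0)]
```

One dyed structure (the 1 at row 0, column 1) runs through the first two slices, while another sits lower down. With real data the same stacking principle, at vastly greater scale, recovers the three-dimensional wiring.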

Of course, a three-dimensional map of cells and connections is still a long way from explaining the characteristics of our minds that are the essence of what makes us human.

This article was originally published in Cosmos 21 (June/July 2008) and later won the award for Best Analytical Writing at the Publishers Australia Bell Awards for Publishing Excellence in November 2008.
