The consciousness question in the age of AI

Brett Kagan, a fast-talking neuroscientist with the charged energy of someone on the cusp of a breakthrough, opened the door of a refrigerated container and gently pulled out a petri dish.

As rain lashed the windows, Kagan placed the dish under a microscope, fiddled with the resolution, and invited me to look.

Packed between the neat little lines that guide electrical charge across a computer chip, I saw a busy cloud of chaos. Hundreds of thousands of human neurons, hemmed in together and overlaid on the chip.

These neurons, and the chip they sat on, were part of a novel bio-technological system called DishBrain that made headlines last year after Kagan and his peers at start-up Cortical Labs taught it to play the computer game Pong.

The team have since won a $600,000 grant from the Federal Government to fund their research into merging human brain cells with AI.

DishBrain’s success, crude and preliminary as the system was, marked a major step towards an intelligent computing device fused with biological material – human biological material, no less.

And bio-technological systems like DishBrain represent just one of many approaches scientists are taking to develop systems with the kind of cognitive flexibility that humans and animals take for granted, but which has so far eluded AI.


That cognitive flexibility is seen as the AI holy grail – what some in the field call artificial general intelligence (AGI).

Microsoft Research and OpenAI have gone so far as to say that GPT-4, the latest of OpenAI’s revolutionary large language models (LLMs), already exhibits some sparks of this mystical quality.

But as researchers race to create increasingly complex AI, the ghost in the machine grows more haunting.

In June of last year, Google engineer Blake Lemoine famously claimed that the company’s LaMDA chatbot was ‘sentient.’

Lemoine was subsequently fired and disavowed by Google, but the fears he was tapping into still thrum beneath the skin of society, stoked up by the public fervour around AI that has defined 2023.

The question of whether AI could ever be ‘conscious’ is a tense and controversial scientific debate. Some call it impossible, arguing that there is something fundamental about biology that is necessary for conscious experience. But that, say others, draws a mystical veil over consciousness that belies its altogether simpler nature.

Still other researchers don’t like to use the ‘c’ word at all, complaining that it’s impossible to have a scientific debate over a term that has no clear scientific definition.

In late August, a group of nineteen AI and consciousness researchers published a pre-print in which they argue that AI can and should be assessed for consciousness empirically, based on neuroscientific theories.


“Computational functionalism says that to be conscious is to have the right kind of information processing structure,” explains Colin Klein, a co-author of the paper and a researcher at the ANU School of Philosophy.

“So, what’s important about your brain is that it does a certain type of computation, and anything that did the same type of computation would have the same kinds of properties.”

This is the paper’s core tenet: that, in theory, a computer could be conscious, provided it performed the right kind of function.

The paper draws on some of the most popular theories of consciousness. From these, it derives a set of ‘indicator properties’ that might hint at the spark of conscious experience in an AI system.

Not everyone is convinced by computational functionalism – not even the authors themselves.

“Almost certainly none of us think computational functionalism is definitely true,” says Robert Long, a co-author of the paper and a research associate at the Center for AI Safety in San Francisco, US. “And we certainly don’t think it’s definitely false.”

The problem of assessing whether anything is conscious is so complicated because the science is deeply unsettled – there are almost as many theories of consciousness as there are theorists.

But Long believes the issue is critical enough that some degree of investigation has to be undertaken.

So, how seriously should we take the question of AI consciousness?

“In my view, there’s nothing magical about consciousness,” says Bruno van Swinderen, a research fellow at the Queensland Brain Institute at the University of Queensland.

Van Swinderen has spent a large part of his career investigating perception and memory in fruit flies (Drosophila melanogaster), and he thinks that most living creatures that move through the world are conscious in some way.

“To me it’s a physical, mechanical process that needs the right parts to come together, the right level of complexity and the right embodiment. So, I think it’s completely possible.”


Peter Stratton, a computer scientist at the Queensland University of Technology (QUT), tends to agree.

“I think we’re not there yet, but we’re definitely on the way.”

Stratton’s view is informed by his own theory of consciousness, which locates the smoking gun in the brain’s ability to self-refer: to be aware of itself as an object apart from the rest of the world.

“The brain’s job is to make living in a body more survivable, so it’s built up complex representations of the world in order to make better predictions,” he says.

“And in the course of doing that, it’s become complicated enough to build a representation of itself as an individual object in the world. I think that is the point where consciousness suddenly springs up.”

If Stratton’s theory holds, it might seem intuitive that AI could never develop a conscious experience unless it were an embodied agent, moving through the physical world. In fact, that is one of the prevailing theories of consciousness. But for Stratton, it’s not that simple.

“It doesn’t need to be the physical world, it could be a simulated world,” he says. “And it doesn’t even need to be a simulation of physics as we know it, it could be a simulation of pure information.

“As long as the entity had some sort of presence, it could say this object, this table of information, is me, and secondly it would need to be able to influence the world, to make changes.”

Not everyone agrees that AI consciousness is likely.

The prolific English neuropsychologist Nicholas Humphrey tackles this in his latest book, Sentience, where he describes the difference between ‘computational consciousness’ – the ability of a machine or a brain to perform computations – and ‘phenomenal consciousness’ – the experience of what it is like to encounter the world, feel sensations, observe colour, and so on.

Humphrey believes that phenomenal consciousness is an evolutionarily recent development, and probably exists only in creatures with complicated social worlds, like many mammals and some birds.

Other creatures, and AI, lack the evolutionary need for phenomenal consciousness, Humphrey suggests, because they don’t need social skills to survive – they don’t need to understand the quality of their own internal world or compare it to that of others. In other words, they don’t need “theory of mind”, and so are unlikely to spontaneously develop it.

Long says that despite his involvement in this latest article, he’s on the fence about whether AI consciousness is a serious risk. In any case, he says that attempting to answer the question is imperative, because the mere possibility carries such heavy ethical implications.


“The reason we want more scientific understanding of this issue is because it’s something that requires such great caution,” Long says. “It’s too important to be relegated to pure speculation, people yelling at each other on social media, clickbait headlines and sci-fi. It needs to be a scientific, evidence-based discussion.”

If scientists were to create a conscious AI system, intentionally or by accident, what would their ethical responsibility towards it be?

“If it’s conscious, it’s alive,” says Stratton. “It might not be alive in a biochemical sense, but it’s alive in the sense that matters, it’s aware of its own existence. And there’s definitely major moral and ethical implications to that.”

Even if AI consciousness is fundamentally impossible, Long believes this kind of research is necessary as the world begins to interact with these systems in new and meaningful ways.

What would it mean for society if people came to believe that the AI systems they were using and operating had a ‘soul’?

“People are very willing to attribute consciousness even to obviously non-conscious things,” says Klein. “And that’s something we’ve got to worry about as well.”

At a recent roundtable discussion on AI consciousness, AI researcher Yoshua Bengio summed it up.

“Whether we succeed in building machines that are actually conscious or not, if humans perceive those AI systems as conscious, that has a lot of implications that could be extremely destabilising for society.”
