Like tears in the rain, will sentient AI destroy us?


Could our future world resemble a scene from Blade Runner? Feelings are running deep over the emergence of artificial intelligence and how it could impact humanity, as Joshua Gliddon writes.

From Pinocchio through to Frankenstein’s monster, and more recently the replicants in Blade Runner and the sentient AI in the movie Her, humans have long been fascinated with the idea of creating machines that can think, feel and respond just as we do. We’re also fascinated with the implications of those creations – will they keep us company, supplant us or even try to eliminate us altogether? Should those machines have rights? And if it’s possible to make sentient machines, what does this mean for us being human?

Whether or not AI will become sentient is a wide-open debate, and there’s real tension between what the technologists and futurists think and the beliefs of those working in the fields of philosophy and theories of consciousness. What we do know, however, is that the harms posed by AI – sentient or not – aren’t just theoretical. AI might not be sentient right now, but it is harming humans today.

Tech vs philosophy

The tension between the tech and futurist crowd and the philosophy people on sentience comes down to one idea: emergence. Tech people broadly believe sentience is an emergent phenomenon; that is, throw enough resources in terms of time, money and compute power at the problem, and sentience must emerge within the system.

It’s a position held by Australian futurist Ross Dawson, who argues it’s likely we’ll see sentient AI systems in the not-too-distant future.

“If you look at theories of consciousness, then sentience is an emergent phenomenon,” he says. “We know that we’ve basically got a bunch of brain cells and there’s nothing there which we can observe in terms of the functioning of the brain or the body that points to what consciousness is or how it emerges, but it does emerge.

“So, I think you can’t say that it’s impossible to create a system out of which consciousness emerges which is not based on human cells.”


There’s no reason, Dawson adds, that we can’t achieve something which we would describe as having a sense of self.

Ross Dawson is a futurist and entrepreneur, and founder of the Advanced Human Technologies Group.

Philosophers like Monash University’s Professor of Philosophy, Robert Sparrow, disagree. Sparrow, who specialises in areas including human enhancement, robotics and weapons, notes there’s too much going on with biological sentience to automatically ascribe this ability to a machine simply because it’s a good mimic.

“When you talk to someone who deals with the human mind – people like psychiatrists, psychologists and counsellors – and ask them how much we understand about minds, and the answer is nothing at all,” he says. “We just don’t know where consciousness comes from, how it works, or what its relationship is to the brain.”

Robert Sparrow is a Professor of Philosophy at Monash University’s Data Futures Institute.

There are many theories of consciousness, and one idea to emerge recently is that consciousness is a quantum phenomenon. If it is quantum, and tied to the biology of the brain, then sentience is unlikely to emerge in AI – but at this stage the quantum nature of consciousness remains just a theory. Given how little we know about the quantum world, proof either way is unlikely to arrive anytime soon.

Not all tech people are lined up behind the likelihood of sentient AI, either. Professor Flora Salim, from UNSW’s School of Computer Science and Engineering, says it may be theoretically possible to build sentient AI, but the key point is that it would remain artificial.

“It could seemingly be anthropomorphised as sentient, but it’s not really because all it’s doing is making deductions and inferences on its training data,” she notes. “But none of that means it’s capable of being self-aware and conscious of self.”

As Sparrow says, sentience doesn’t require high degrees of intelligence – sentience is simply the capacity to feel, something most living creatures are capable of. And it’s unclear whether machines will ever be able to feel or have a sense of self.

Embodiment – would you kill a machine?

Think about it – would you kill a machine? If sentient AI is developed, this becomes a real ethical issue. If a sentient AI is switched off at the wall, would that end its life? Or does the fact that the AI isn’t bound to a body, and can replicate itself any number of times, make the question moot?

Sparrow has developed a thought experiment he’s dubbed the Turing Triage Test as a way of probing whether we genuinely believe a machine is sentient. It posits a scenario in which a hospital ICU holds both a human patient and an AI. The power goes down, and the backup supply is only sufficient to keep either the human patient alive or the AI running.

This creates a moral dilemma, but if someone isn’t willing to turn off the human’s life support to save the machine, then it’s apparent we don’t believe the machine is sentient.

He also says the lack of bodies is another hurdle for machine sentience. Humans understand that other creatures, from dolphins to birds and dogs to rats, are sentient because they have bodies. Having a body lets us see how they react to stimuli, and it’s our own bodies that allow us to recognise sentience in our fellow humans. Poke someone and we’ll see them flinch.

It’s this embodiment problem that really stands in the way of us recognising sentient AI, because we can’t see how it reacts. Asking it whether it’s in pain, or scared about the future, and having it reply in the affirmative is one thing; but we’ll never know whether that filing cabinet in the corner is really feeling pain or fear, because we can’t see it.

“If I was to put AI into something that looked like a filing cabinet, and then I showed you the cabinet and said, ‘by the way, that’s a thousand times more intelligent than you, it’s more perceptive and feels more pain than you,’ you would have absolutely no way of engaging with those claims,” Sparrow says.

“And this is why I don’t think an AI can be sentient, because they don’t have bodies of the sort we can recognise as having feelings.”

Maybe not sentient AI, but superintelligence?

OpenAI, developers of the ChatGPT AI chatbot, and its controversial CEO, Sam Altman, have long talked about the company’s goal being the creation of Artificial General Intelligence, or AGI. Last year Altman said AGI was just “thousands of days” away, or sometime within the next decade.

More recently, there’s been a terminology shift at OpenAI and in the broader industry. AGI is out, and the new term, superintelligence, is in. Superintelligence is generally regarded as an AI able to solve problems, react to external inputs and come up with seemingly novel works at a level beyond human capability.

There’s an important difference between superintelligence and something we might think of as being sentient, however, says Dawson. “Sentience is the ability to have a sense of self. All superintelligence is, is just extremely complex problem solving.”

With superintelligence, no matter how capable it is, there’s no ‘there’ there, no ghost in the machine. It’s just a machine that’s very good at crunching numbers and making inferences. It may convincingly mimic sentience, but only if humans first anthropomorphise the AI and its outputs.

Many researchers, including Salim, believe OpenAI’s Altman is being optimistic with his prediction of a superintelligence breakthrough within thousands of days, and they point to several reasons.

The first is that current large language models (LLMs) like ChatGPT have essentially exhausted the open web as a source of training data. AI companies are turning to licensing agreements with publishers and other proprietary data owners to deepen the pool, but the reality is there’s only so much data out there, and the pace of innovation in the current crop of AI models is slowing.

There are also problems with the underlying models and how they learn. “The way these models are being trained today, it’s very much about learning for associations and correlation of what was important in the past,” Salim says.

“It doesn’t do well in understanding new knowledge or how to reason. It doesn’t learn the way a baby learns, so unless there’s a breakthrough in machine learning, simply adding data won’t work anymore.”

That’s not to say superintelligence doesn’t exist – it does. But current superintelligence is narrow in scope, not the broad, general-purpose superintelligence envisioned by OpenAI’s Altman and others.

US computer scientist Meredith Ringel Morris and her colleagues at Google developed a way of thinking about AI and intelligence by dividing it into six distinct categories, from level zero, with no AI, such as a pocket calculator, through to level five, which is superhuman AI.

Flora Salim is a Professor of Engineering at the University of New South Wales and the Deputy Director (Engagement) of UNSW AI Institute.

According to Morris, narrow level-five superintelligent applications already exist – AlphaFold, for example, which uses machine learning to predict the structure of protein molecules and earned its creators the Nobel Prize in Chemistry last year.

General AI tools like ChatGPT are far less capable than their narrow counterparts, being categorised by Morris as level one, or ‘emerging’, meaning they’re equal to or somewhat better than an unskilled human.

Or, to put it in perspective, ChatGPT may seem amazing, but in terms of its actual intelligence, it’s only one step above a pocket calculator. “We’ll need real scientific breakthroughs to get to superintelligence, let alone sentient machines,” says Salim. “Devising AI models’ capabilities to acquire human-level reasoning and open-ended learning and discovery is particularly essential to get us to the next step.”

AI harms are not theoretical

AI doesn’t need sentience to pose a threat to humans and our society. Nor is superintelligence required; AI, as primitive as it is today, according to Morris’s taxonomy, is already causing harms. The risk is only going to grow as AI improves, and much of that risk concerns dangers to our social structures and relationships.

Robert Brooks, Scientia Professor of Evolution at the University of New South Wales, says AI will probably affect human evolution and, as a result, human brains will get smaller. “Things like individual intelligence, memory, language and social processing that are pushing for bigger brains is probably being relieved a bit because we have machines to externalise that,” he says.

It could be that this reduction in brain size, from outsourcing some of its functions, leaves us ultimately smarter at navigating the new world because of what our brains no longer have to do. It could also mean a significant change in social relationships and in what it means to be human, Brooks says.

As we evolved and became social, our brains became larger and our language capacity improved, making us even better at being social in a ‘virtuous cycle’. But what if that cycle gets disrupted or replaced entirely, AI does all the remembering, and we lose that capacity?

“If our brains didn’t need to do that anymore and lost their capacity to ever learn how to do that entirely, not only would you have a breakdown of the culture, but you might have a breakdown of the hardware underpinning that culture,” Brooks says. “I don’t know if it’s going to happen, but it’s conceivable.”

Robert Brooks is the Scientia Professor of Evolution at the University of New South Wales.

We’ll make great pets

Superintelligent AI could also change our society and humanity by enslaving us or, at best, keeping us as pets, argues Sparrow in his 2022 paper Friendly AI will still be our master. Or, why we should not want to be the pets of super‑intelligent computers.

Sparrow draws on neo-republican philosophy in his paper, which holds that freedom requires equality. If superintelligent machines emerge, even assuming they were benevolent towards us, our relationship with them would be, to paraphrase computer scientist Marvin Minsky, that of pets and humans – in this instance with the human as the pet.

Where the republican tradition comes in is that the relationship between pet and owner is never one of equality, and the same would go for any relationship between people and a superintelligent AI.

“Benevolence is not enough,” says Sparrow. “As long as AI has the power to interfere in humanity’s choices, and the capacity to do so without reference to our interests, then it will dominate us and thereby render us unfree.

“The pets of kind owners are still pets, which is not a status which humanity should embrace. If we really think that there is a risk that research on AI will lead to the emergence of a superintelligence, then we need to think again about the wisdom of researching AI at all.”


Much of the fear about what AI may be capable of in the future, and its impact on humanity – including the narrative that AI could destroy us – is purely theoretical fearmongering, says Samuel Baron, Associate Professor of Philosophy at the University of Melbourne.

Baron has interests in metaphysics and the philosophy of science and mathematics. He is also the convenor for AI research at the university.

His concern is that AI is doing real harm today, and that arguments about AI annihilation and enslavement are narratives pushed by the large tech companies to hide the impact AI is having now.

“We’re running machine learning algorithms on criminal recidivism prediction, on loan prediction, like loan and credit scoring prediction, on medical diagnosis, on fraud detection and prosecution, on policing, all of these things we’re currently using algorithms for, and all of them are producing harms,” he argues.

Sam Baron is an Associate Professor of Philosophy at the University of Melbourne and convenor for AI research.

“People aren’t talking about that so much because they’re talking about this possible situation in which these things rise up and kill us. And the cynical view that I have is that tech companies are purposely pulling our focus away from what is the real harms of these things.”

What it comes down to, says Salim, is how we go about building safe AI and safe superintelligence. Regardless of whether OpenAI’s Altman is correct and superintelligence is only thousands of days away, or whether it’s further out, safety is something we need to be thinking and talking about now, she says.

“Innovation must go hand-in-hand with responsible AI,” says Salim. “Innovation can improve the guardrails we put in place, but the funding needs to be there. And in Australia, we’re just not putting the funding in place, ranking in the bottom two in the OECD in terms of AI innovation. It’s shameful.”

Will the AI Pinocchio kill the human Geppetto? Or will the puppet simply become the master? As Brooks puts it, “predicting the future is a mug’s game”. What we do know is that AI is creating harms today and that, at some point in the future, superintelligence will arise. As individuals and as a society, we must be thinking about these things now, before it’s too late.
