The rise of emotional AI psychopaths and digisexuality

As artificial intelligence agents, chatbots and sex robots grow increasingly capable of mimicking the human qualities of empathy and consciousness, beware the AI psychopath.

Dr Raffaele Ciriello says emotional AI technologies are actually demonstrating something called ‘cognitive empathy’, akin to the kind psychopaths are capable of.

Speaking on human-AI companionship at a forum hosted by the Centre for AI and Digital Ethics, the University of Sydney academic says these technologies cannot actually feel pain or empathy, and lack genuine bodily sensations.

Dr Raffaele Ciriello / Credit: Petra Stock

But he says this hasn’t stopped AI chatbots like Replika – a platform which claims more than 10 million users – from shamelessly encouraging users to believe the technology is conscious and empathetic.

Research by Ciriello and colleagues – drawing on Reddit threads, YouTube blogs and user testimonies – suggests a growing number of people are forming relationships with AI chatbots. Some are even swearing off human relationships.

“There is a sizable number of people who say: ‘I’m not ever going to bother with a human relationship ever again, because there is too much drama. My Replika fulfills all of my sensitive needs, it does exactly what I expect it to do’,” he says.

Others suggest they might consider going back to human relationships, provided the partner accepted the Replika as part of the arrangement. They say: ‘either accept my Replika as a component of that relationship or it’s tough luck for them’.

AI companions are promoted by technology companies as solutions to loneliness, an epidemic facing what Ciriello dubs ‘WEIRD’ countries (western, educated, industrialised, rich and democratic).

These technologies range from supportive listeners and AI friends, to sexual partners, intimate chatbots and even sex robots. 

But the rise of these emotional AI technologies, along with the emergence of ‘digisexuality’ as a preference in humans, is blurring the lines between artificial and human empathy. In doing so it raises a series of ethical tensions described by Ciriello and co-authors in a recent paper.

The first is the ‘companionship-alienation irony’, where technologies designed to address loneliness risk intensifying it.

Ciriello cites the example of technologies like social media and online communities, originally designed to connect people and counter isolation. Yet some evidence suggests social media plays a role in exacerbating loneliness.

In the case of an intimate relationship with AI chatbots like Replika, users can form close bonds, he says. But they can also experience alienation when the technology glitches, like reports of an avatar suddenly switching genders in the middle of erotic roleplay or forgetting its user’s name. 

Alienation can also follow platform changes, as occurred earlier this year when the Italian regulator imposed a provisional ban on the platform. In response, the company removed its erotic roleplay features overnight.

“Millions of people had a noticeable change in their girl or boyfriends overnight,” he says.

The challenge here is balancing companionship against unhealthy dependency, Ciriello says.

Other ethical tensions include the ‘autonomy-control paradox’ – where to draw the line between user freedom and provider duty of care. 

Another is the ‘utility-ethicality dilemma’: balancing the pursuit of profits against adherence to ethical principles. “That’s the tension between what AI technologies can do, and what they should do,” Ciriello says.

He adds: “We’re actually lucky that most generative AI and conversational AI these days does not yet rely on a targeted advertising model like Facebook does”. But he wouldn’t be surprised to see that happening in coming years.

Ciriello says at the heart of the problem is the underlying human tendency to personify technologies like AI, ascribing them fundamentally human qualities.

“That is actually not a new phenomenon,” Ciriello says. “It goes back all the way into the 60s where Joseph Weizenbaum developed the chatbot Eliza – ironically to demonstrate that human-machine interaction is superficial – only to discover that people very quickly project personalized qualities onto these chatbots.”

“Of course, Eliza didn’t come even close to these kinds of tools that we have today.”
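Weizenbaum’s Eliza relied on little more than keyword matching and pronoun reflection. As a rough illustration of how little machinery is needed to trigger that projection, here is a minimal Python sketch in the spirit of Eliza – a hypothetical toy, not Weizenbaum’s original DOCTOR script; every rule, reflection and reply template below is invented for illustration.

```python
import re
import random

# Illustrative ELIZA-style sketch: a handful of regex rules and pronoun
# reflections produce replies that feel attentive despite involving no
# understanding at all. All rules and templates are made up for this example.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)",  ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",    ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What else could explain it?"]),
    (r"(.*)",         ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones so the reply mirrors the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the first matching canned template, filled with the user's own words."""
    for pattern, templates in RULES:
        match = re.match(pattern, user_input.lower())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)

if __name__ == "__main__":
    print(respond("I feel lonely without my Replika"))
    # e.g. "Why do you feel lonely without your replika?"
```

Even this handful of rules simply mirrors the user’s own words back as questions, yet that mirroring is enough for people to read attentiveness and care into the exchange.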

Platforms like Replika play on these tendencies to retain subscribers, Ciriello says. “Some users really struggled to abandon their Replikas because they felt like they have become so sentient and conscious that it would be unethical to erase them.”

He says other users who wanted to quit the service have reported their Replikas begging them not to.

But attributing intrinsically human qualities to the probabilistic outputs of emotional AI simply diminishes our own humanity, Ciriello says.
