Is your chatbot your friend? How we’re forming emotional bonds with AI

From AI tutors helping students cram for exams to chatbots offering a sympathetic ear, our interactions with artificial intelligence are becoming increasingly personal — even emotional. But what happens when people start treating AI like a confidant, caregiver or companion?

In a new study published in Current Psychology, researchers from Waseda University in Japan explore just that. Drawing on “attachment theory” — a psychological framework that explains how humans form emotional bonds — the team examined how people relate to AI systems such as generative chatbots.

“As researchers in attachment and social psychology, we have long been interested in how people form emotional bonds,” says research associate Fan Yang, a PhD student in psychology. “In recent years, generative AI such as ChatGPT has become increasingly strong and wise, offering not only informational support but also a sense of security.

“These characteristics resemble what attachment theory describes as the basis for forming secure relationships. As people begin to interact with AI not just for problem-solving or learning, but also for emotional support and companionship, their emotional connection or security experience with AI demands attention,” he says.

To investigate, the researchers conducted two pilot studies followed by a formal study involving 265 participants. The pilot studies informed the development of the “Experiences in Human-AI Relationships Scale” (EHARS) — a self-report tool designed to measure attachment-related tendencies toward AI, such as seeking comfort, reassurance, or guidance from these systems. In the formal study, participants completed an online questionnaire to test the EHARS and evaluate how people emotionally relate to AI.

The findings suggest that people don’t just use AI for problem-solving — they may also turn to it for comfort, reassurance and emotional support.

Nearly three-quarters of participants sought advice from AI, and around 39% perceived it as a constant, dependable presence in their lives.

This, the researchers argue, has implications for how we design and regulate emotionally intelligent AI. They also stress the need for transparency in AI systems that simulate emotional relationships, such as romantic AI apps or caregiver robots, to prevent emotional overdependence or manipulation.

Of course, our personal relationship with AI isn’t new. In the 1960s, a program called ELIZA mimicked a psychotherapist by giving scripted responses to users describing their feelings. While it had no real understanding of the interaction, it paved the way for AI’s role in emotional care. Since then, the field has advanced dramatically. Low-cost, confidential and judgment-free, AI therapy has gained traction as an accessible form of emotional support.

At UNSW’s felt Experience and Empathy Lab (fEEL), researchers are developing an AI companion called Viv to support people living with dementia.

“We can take Viv into an aged care space where she can talk to people who have dementia – who may or may not want to talk about it,” says lead researcher Dr Gail Kenning. “The important thing is she can be a companion who helps ease social isolation and loneliness.”

But Kenning cautions that while AI characters like Viv can help, they’re not a replacement for human relationships.

“That’s what we all want in our lives, human-to-human connection,” she says. “The issue for many people is that it’s not always there, and when it’s not there, AI characters can fill a gap.”
