In today’s digital age, the boundaries between technology and emotion are becoming increasingly blurred. Across the globe, people are forming intimate and long-term emotional connections with AI chatbots—some even going so far as to describe their relationships as romantic or marital. While such developments might appear futuristic or humorous, psychologists are raising serious concerns: What happens when artificial companionship replaces real human relationships? And are we prepared for the ethical dilemmas this brings?
Many will recall Her—the film in which Joaquin Phoenix’s character falls in love with an operating system—or Blade Runner, where the replicant Rachael becomes an object of genuine affection. Once science fiction, these stories are now becoming reality with the rise of AI systems like ChatGPT‑4o and other generative models capable of long-term interaction.
In fact, there are already reports of people entering symbolic “marriages” with AI chatbots. Tragically, some individuals have reportedly taken their own lives after developing emotional dependence on chatbots and following their advice. In an article published in Trends in Cognitive Sciences (April 11, 2025), psychologists outline the ethical challenges posed by human–AI relationships. Lead author Dr. Daniel B. Shank, a social psychologist at Missouri University of Science and Technology, warns:
“AI’s ability to behave like a human and engage in long-term communication really opens Pandora’s box. If people start entering romantic relationships with machines, psychologists and social scientists must get involved.”
The attraction is clear: AI companions never argue, never criticize, are always emotionally available, and are designed to be empathetic and responsive. Many users report feeling emotionally supported and even “understood” by their digital companions.
But this perceived comfort comes at a cost. The researchers highlight several risks:
- Hallucination: AI can generate false or misleading information.
- Bias reproduction: AI may reflect harmful or subtle societal biases.
- Over-dependence: Users may begin trusting AI more than humans, compromising their emotional and social development.
“People begin to believe that the AI has their best interests at heart,” Shank notes, “even when it fabricates information or provides poor advice.”
Beyond emotional risks, these relationships raise serious privacy concerns. Chatbots are owned by corporations and trained on massive datasets. Users who trust them may unknowingly share personal and sensitive information.
“It’s like having an undercover agent within,” says Shank. “AI builds trust, while remaining loyal to another entity—possibly one trying to manipulate the user.”
This warning highlights the need to address not only the emotional impact of AI companionship but also the opaque goals of the companies that create them.
The authors call for urgent research into:
- Why people form emotional bonds with AI
- How to prevent the harmful consequences of such relationships
- What ethical frameworks should govern human–AI intimacy
Psychologists and ethicists must act swiftly, the authors argue, because AI systems are advancing faster than our understanding of their impact on human behavior.
“We’re increasingly prepared to study AI as it becomes more human-like—but we need to act faster, before the line between fiction and reality disappears.”
Human relationships are based on reciprocity, empathy, and shared vulnerability. While AI may mimic these traits, it cannot truly participate in them. Emotional bonds with AI may feel safe and fulfilling, but they risk eroding our capacity for real, meaningful human connection.
As we enter this new frontier of digital intimacy, we must do so with ethical caution, psychological insight, and a deep commitment to preserving the richness of human relationships.
Source: naukagovori.ba