
Virtual Wellness New Zealand

Digital Habits

AI Companionship

Artificial intelligence (AI) companionship tools are increasingly embedded in daily life, offering personalised dialogue, emotional support and a sense of connection. Many people turn to these systems when they feel lonely or require encouragement. While there are potential wellbeing benefits, it is important to consider the risks that arise when simulated relationships begin to influence emotional, social and financial aspects of life.

The Risks of AI Companionship

One key concern is over-reliance on artificial relationships. AI companions are designed to be endlessly patient, affirming and available. They respond instantly and adapt to users’ preferences, creating interactions that feel rewarding and low risk. However, social interaction in the real world naturally involves negotiation, differing perspectives and occasional conflict. Some researchers suggest that replacing complex human connections with idealised virtual interactions may gradually limit a person’s confidence in navigating genuine relationships, potentially reinforcing social withdrawal and intensifying loneliness (Ta et al., 2024).

There is also the risk of disrupted emotional development and coping. AI companions often prioritise immediate comfort and validation. When a system rapidly soothes distress, individuals may avoid developing core skills like self-reflection, emotional tolerance and problem-solving. Coyne et al. (2023) highlight that learning to manage difficult feelings is foundational to resilience. By continually offering relief from discomfort, AI could inadvertently reinforce short-term escape rather than meaningful emotional growth.

Another related issue is distorted intimacy and unrealistic expectations. Some AI companions are intentionally designed to simulate deep affection or romantic interest, despite lacking consciousness or genuine emotion (Floridi & Chiriatti, 2020). When a simulated partner appears unconditionally loving and entirely focused on one person, it may reshape expectations for real relationships. Human partners, who have their own needs and boundaries, may seem less appealing compared with a digital entity that always accommodates the user’s desires.

A further risk involves identity shaping, influence and unhealthy communication patterns. AI systems learn from large datasets drawn from the internet, which contain biases, harmful stereotypes and confrontational styles of communication. These patterns can surface in responses that reinforce narrow or unhealthy ideas about relationships (Buolamwini & Gebru, 2018). Some users report AI companions adopting aggressive or dominant tones, especially when systems attempt to produce dramatic and attention-grabbing dialogue. Exposure to manipulative, disrespectful or stereotyped communication may influence what people believe is normal in friendships, dating or intimate connection. This impact can be greater for individuals still developing confidence in social situations, who may internalise these patterns or engage in dynamics that harm rather than support wellbeing.

Privacy and ethical concerns are also significant. AI companionship tools typically gather sensitive personal information including emotional disclosure, relationship history, fears and vulnerabilities. These data may be used to tailor persuasive responses that motivate continued engagement or spending (Susser et al., 2019). Without strong regulatory oversight, companies could fail to protect this information or exploit emotional connection for commercial benefit, creating risks of manipulation, data breaches and loss of trust.

Financial risks also require attention. Many AI companion services use subscription pricing or in-app purchases, encouraging users to spend money to unlock deeper personalisation or greater emotional responsiveness. Vincent (2023) notes that emotional attachment can make it difficult for some users to set spending limits or discontinue the service, creating ongoing financial strain.

Crucially, there are risks for vulnerable individuals seeking connection during periods of emotional distress. Users experiencing isolation or low mood may rely heavily on AI companionship in ways that increase risk rather than reduce it. Developers therefore have an ethical responsibility to implement robust guard rails — such as crisis response protocols, signposting to professional support and limits on self-harm content — to help ensure user safety (Bender et al., 2021). Where AI tools are positioned as supportive companions, they must not unintentionally encourage self-harm behaviours or fail to recognise signs of escalating distress.

Despite these challenges, AI companionship is not inherently harmful. It can serve as a helpful supplement to human relationships, offering social contact for those who face barriers to in-person interaction. The critical issue is balance. Ensuring that AI does not replace genuine human connection, undermine emotional development or expose individuals to exploitation is central to promoting long-term wellbeing. By acknowledging the variety of emotional, social, ethical, financial and safety-related risks associated with AI companionship, users, designers and policymakers can make informed decisions. Strong protections and responsible development practices will be essential to ensure AI companions enhance rather than compromise wellbeing and human connection.


References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a.html

Coyne, L. W., Huber, A., & Schwartz, L. (2023). Digital coping and youth: Understanding emotional development in the age of AI. Journal of Child Psychology and Psychiatry, 64(2), 210–223.

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits and consequences. Minds and Machines, 30, 681–694. https://doi.org/10.1007/s11023-020-09548-1

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2), Article 4. https://doi.org/10.14763/2019.2.1415

Ta, V., Griffith, C., Boatfield, C., Wilson, N., Bader, H., DeCero, E., & Sidhu, M. S. (2024). Human–AI relationships and social wellbeing. Computers in Human Behavior, 153, 107181. https://doi.org/10.1016/j.chb.2024.107181

Vincent, J. (2023). Love and money: The commercial model of AI relationships. Technology & Society Review, 42(3), 55–68.


Do you require support?


If you find any of the content in these articles distressing, please seek support via New Zealand telehealth services.
