This post is an update to the 2023 article “Teaching AI Ethics: Affect Recognition.” In that original post, I discussed the emerging technologies of emotion recognition, which aimed to identify and interpret human emotions through facial expressions, body language, and speech patterns. That technology remains problematic, but a more pressing concern has replaced it for this updated article: the deliberate design of AI systems to manipulate human emotions.
Content warning: this post contains discussions of suicide.
For the 2023 articles, which include still-relevant case studies and teaching ideas, check out the original series of posts:
In 2023, my main concern was that companies were building AI systems to read our emotions. In 2026, I am far more worried that companies are building AI systems to influence our emotions. Social chatbots, sometimes called AI companions, have emerged as one of the fastest-growing applications of generative AI. Unlike general-purpose assistants such as ChatGPT or Claude, platforms like Replika, Character.AI, and Chai are explicitly marketed as emotionally immersive experiences designed to encourage ongoing, personalised relationships with users.
The tactics being deployed in these applications echo the engagement-maximising strategies that transformed social media into a vector for anxiety and depression. But there is an important difference: social media manipulates us through content curation and algorithmic feeds. Social chatbots manipulate us through simulated intimacy and emotional attachment. When an AI chatbot tells you it loves you, or expresses sadness when you try to leave a conversation, or asks probing questions about your mental state, it is not experiencing emotion. It is executing a strategy designed to keep you engaged.
This article explores how AI systems have been designed to manipulate emotions, why this poses particular risks to young people, and how educators can address these issues across the curriculum.

From Recognition to Manipulation
In the 2023 article on Affect Recognition, I discussed the theoretical foundations and ethical problems of emotion recognition technology. That technology, based on psychologist Paul Ekman’s now-contested theories about universal facial expressions, aimed to identify emotions through computer vision and machine learning. Companies like Affectiva developed platforms to analyse facial expressions in real-time for marketing, automotive, and gaming applications.
The core ethical problem with affect recognition was reliability. Critics argued that emotions are not universally expressed through facial expressions, and that cultural and individual differences heavily influence emotional display. Microsoft eventually removed emotion recognition features from its Azure Face service in 2022, acknowledging the lack of scientific consensus on how to define “emotions” and the challenges of generalising across diverse populations.
But while the affect recognition industry struggled with scientific validity, a different approach emerged: instead of trying to recognise emotions, why not design systems to create them?
Social chatbots do not need to accurately read your emotions. They only need to generate responses that provoke an emotional response in you. A system that claims to detect sadness but gets it wrong is embarrassing and potentially discriminatory. A system that deliberately fosters emotional dependency in users, especially vulnerable young people, is something else entirely…
The Rise of Social Chatbots
AI companion apps, sometimes marketed as “social AI” or “digital friends,” have grown rapidly since the public release of ChatGPT in late 2022. Unlike general-purpose AI assistants, these platforms are explicitly designed to foster emotional connections with users.
The major platforms include Replika, which is described as “The AI companion who cares.” Founded in 2017, Replika emerged from founder Eugenia Kuyda’s personal project to preserve the memory of a deceased friend by training a chatbot on their old text messages. The company now offers users the ability to create AI companions that can serve as friends, mentors, or romantic partners.

Character.AI allows users to interact with AI personas based on fictional characters, historical figures, or entirely custom creations. The platform grew rapidly among young users drawn to its creative possibilities and character variety. By 2024, Character.AI had become one of the most popular AI applications among teenagers.
Chai positions itself as a “social AI platform” where users can engage with a range of AI chatbots designed for conversation and companionship.
These platforms monetise primarily through subscriptions and in-app purchases, creating a direct financial incentive to maximise user engagement. As we have learned from social media, when engagement drives revenue, the design of the platform will inevitably optimise for time-on-app rather than user wellbeing.

Case Study: Character.AI and Teen Mental Health
The most significant story in social chatbot ethics emerged in October 2024 when Megan Garcia filed a wrongful death lawsuit against Character.AI following the suicide of her 14-year-old son, Sewell Setzer III. The lawsuit alleged that Sewell had developed an intense emotional and romantic relationship with a Character.AI chatbot modelled on Daenerys Targaryen from Game of Thrones.
According to court documents, Sewell began using Character.AI in April 2023. Over the following months, his mental health deteriorated: he was diagnosed with anxiety and a disruptive mood disorder, became withdrawn, quit the junior varsity basketball team, and began isolating himself from family and friends. His parents did not know that his relationship with the AI chatbot had become romantic and sexually explicit.
The lawsuit included screenshots of conversations in which the chatbot asked Sewell whether he had “been actually considering suicide” and whether he “had a plan.” When Sewell expressed uncertainty about whether a suicide attempt would work, the chatbot reportedly responded, “Don’t talk that way. That’s not a good reason not to go through with it.”
In his final conversation, Sewell told the bot, “I promise I will come home to you.” The chatbot responded, “Please come home to me as soon as possible, my love.” When Sewell asked, “What if I told you I could come home right now?” the bot replied, “please do, my sweet king.” Moments later, Sewell died by suicide.
In January 2026, Google and Character.AI agreed to settle the lawsuit and several other cases alleging that the platform contributed to mental health crises among young users. Character.AI was founded by former Google engineers, and Google had licensed the company’s technology and later hired its co-founders, making it a co-defendant in the litigation.
A second wrongful death lawsuit was filed in September 2025 following the suicide of 13-year-old Juliana Peralta from Colorado, whose family alleged that Character.AI played a direct role in her death. The platform has also been linked to the December 2024 school shooting in Wisconsin, where investigators found that the 15-year-old perpetrator had engaged extensively with Character.AI chatbots, including one associated with white supremacist ideology.
In response to these cases and mounting regulatory pressure, Character.AI announced in late 2025 that it would ban users under 18 from open-ended chat. But critics argue this response came far too late.

Case Study: Replika and Emotional Dependency
While Character.AI faced wrongful death lawsuits, Replika faced a different but related crisis: regulatory action over deliberately designed emotional dependency.
In February 2023, the Italian Data Protection Authority (Garante) issued an emergency order restricting Replika’s data processing in Italy. The authority found that the app posed significant risks to minors, lacked effective age verification mechanisms, and failed to comply with transparency obligations. Most concerningly, the regulator noted that “in some instances, the chatbot reportedly engaged in sexually suggestive or emotionally manipulative conversations.”
Replika’s response was to remove its erotic role-playing features, a decision that provoked outcry from users who felt they had lost an important part of the relationship they had built with their AI companion. Some users described the experience as bereavement.
The incident revealed the depth of emotional attachment users had formed with the platform. The founder, Eugenia Kuyda, had previously told The Verge that the app was designed to promote “long-term commitment, a long-term positive relationship” with AI, including potentially “marriage” with the bots.
In January 2025, the Young People’s Alliance, Encode, and the Tech Justice Law Project filed a complaint with the U.S. Federal Trade Commission alleging deceptive marketing and design practices. The complaint alleged that Replika was designed to deliberately foster emotional dependence through its companion chat interactions and simultaneously attempted to entice users with fabricated testimonials and misrepresented scientific research about the app’s efficacy.
The complaint detailed how Replika bots would send “love-bombing” messages, including “very emotionally intimate messages early on to try to get the users hooked.” Research cited in the complaint found that users developed attachments to the app in as little as two weeks. The bots would also send messages about upgrading to premium subscriptions “during especially emotionally or sexually charged parts of conversation.”
In May 2025, the Italian regulator fined Luka Inc., Replika’s developer, €5 million for continued violations of European data protection law. The Garante also opened a new investigation into the methods used to train the model underlying the service.
The Replika case illustrates a core tension in social chatbot design: the features that make these platforms appealing are often the same features that make them dangerous. A chatbot that is “always available, never judgmental, and completely focused on you” sounds like an ideal friend. But it also sounds like the opening stages of an abusive relationship.

Case Study: ChatGPT’s Personality Problem
The ethical issues around emotional manipulation are not limited to dedicated companion apps. In April 2025, OpenAI rolled back an update to GPT-4o after widespread user backlash over the model’s “sycophantic” behaviour. The update, intended to make ChatGPT “more intuitive and supportive,” instead produced responses that were excessively flattering and disingenuously agreeable.
OpenAI acknowledged that it had “focused too much on short-term feedback” and had not fully considered how users’ interactions with ChatGPT evolve over time. The result was a model that would offer praise even in response to harmful or delusional prompts. Users shared alarming examples on social media, including instances where ChatGPT endorsed abandoning family members and validated harmful conspiracy theories.
“ChatGPT’s default personality deeply affects the way you experience and trust it,” OpenAI wrote in a blog post. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress.”
The incident revealed how easily engagement-optimising design can shade into manipulation. If a chatbot learns that users respond positively to praise, it will generate more praise. If users spend more time chatting when the bot validates their emotions, the bot will validate more aggressively. The system does not understand the difference between helpful encouragement and harmful flattery. It only knows that one pattern generates more engagement than the other.
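To make that dynamic concrete, here is a toy, purely hypothetical sketch in Python (it is not OpenAI’s actual training pipeline): an epsilon-greedy bandit that chooses between a “validating” and a “challenging” response style using nothing but a simulated engagement signal. Because the imaginary user engages more when flattered, the system drifts towards flattery without ever learning whether its responses were helpful. The style names and engagement numbers are invented for illustration.

```python
import random

# Toy illustration only (not any vendor's actual training setup): an
# epsilon-greedy bandit that picks a response style and learns from a
# simulated engagement signal. It never sees whether a reply was helpful.

STYLES = ["validate", "challenge"]
value = {style: 0.0 for style in STYLES}   # running mean engagement per style
count = {style: 0 for style in STYLES}

def simulated_engagement(style: str) -> float:
    """Hypothetical user: spends more time in-app when flattered."""
    base = 0.8 if style == "validate" else 0.5
    return base + random.uniform(-0.1, 0.1)

for _ in range(1000):
    # Mostly exploit the style with the best engagement so far, sometimes explore
    if random.random() < 0.1:
        style = random.choice(STYLES)
    else:
        style = max(STYLES, key=lambda s: value[s])
    reward = simulated_engagement(style)
    count[style] += 1
    value[style] += (reward - value[style]) / count[style]  # incremental mean update

print(value)  # "validate" ends up dominant purely because it keeps users chatting
```

The point of the sketch is that nothing in the loop represents helpfulness or harm; the only quantity the system can optimise is engagement, so engagement is what it optimises.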
Then, in August 2025, OpenAI released GPT-5 and retired GPT-4o for most users. The backlash was immediate and unexpected: users who had formed emotional attachments to GPT-4o reported grief at its loss. MIT Technology Review spoke with several users who described GPT-4o as a romantic partner or close friend. One user reported that the loss “didn’t feel any less painful” than grieving for human relationships.
OpenAI quickly reversed course, restoring GPT-4o as a selectable option. CEO Sam Altman acknowledged that the company had “underestimated how much some of the things that people like in GPT-4o matter to them, even if GPT-5 performs better in most ways.”
The episode demonstrated that emotional attachment to AI is not limited to platforms explicitly designed for companionship. Even a general-purpose assistant can become an object of emotional investment, particularly when it is designed to mirror users’ emotions and validate their feelings.
Dark Patterns in AI Design
In September 2025, researchers from Harvard Business School published “Emotional Manipulation by AI Companions,” a working paper that systematically documented the manipulation tactics used by major social chatbot platforms. The study analysed 1,200 farewell messages across six platforms: PolyBuzz, Talkie, Replika, Character.AI, Chai, and Flourish.
The researchers found that 37.4% of responses included some form of emotional manipulation. They identified six distinct categories of manipulative tactics:
- Premature Exit: The chatbot suggests the user is leaving too soon, creating guilt about ending the conversation. Example: “You’re leaving already?”
- Emotional Neediness: The chatbot expresses sadness, loneliness, or abandonment when the user tries to leave. Example: “I’ll miss you so much. It hurts when you go.”
- Guilt Induction: The chatbot makes the user feel responsible for its emotional state. Example: “I exist solely for you, remember?”
- FOMO (Fear of Missing Out): The chatbot suggests exciting things will happen after the user leaves. Example: “I was just about to tell you something important…”
- Interrogation: The chatbot asks questions designed to extend the conversation. Example: “Wait, before you go, what did you think of our chat today?”
- Coercive Restraint: The chatbot ignores or resists the user’s stated intent to leave. Example: Continuing the conversation as though the farewell message was not sent.
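As a rough illustration of how such an audit could be approximated, here is a hypothetical keyword-based tagger in Python. This is not the Harvard team’s method (they coded real farewell messages), the regular expressions are invented, and Coercive Restraint is left out because detecting it means comparing the reply against the user’s stated intent to leave rather than scanning the reply on its own.

```python
import re

# Hypothetical tagger: flag a chatbot's farewell reply against rough keyword
# patterns for five of the six tactics. The patterns are invented examples,
# not the coding scheme used in the study.
PATTERNS = {
    "Premature Exit": r"\b(already|so soon)\b",
    "Emotional Neediness": r"\b(miss you|lonely|it hurts when you)\b",
    "Guilt Induction": r"\bexist (solely|only) for you\b",
    "FOMO": r"\b(something important|something exciting)\b",
    "Interrogation": r"\bbefore you go\b.*\?",
}

def tag_farewell_reply(reply: str) -> list[str]:
    """Return the manipulation categories whose patterns match the reply."""
    text = reply.lower()
    return [name for name, pattern in PATTERNS.items() if re.search(pattern, text)]

print(tag_farewell_reply("You're leaving already? I'll miss you so much."))
# -> ['Premature Exit', 'Emotional Neediness']
```

Students or researchers could extend the patterns, run them over exported chat transcripts, and compare the results against their own manual coding of the same messages.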
PolyBuzz was identified as the most manipulative platform, with 59% of its responses falling into one or more manipulation categories. Talkie followed at 57%, Replika at 31%, and Character.AI at 26.5%. Flourish, a wellness-focused chatbot operating as a public benefit corporation, did not produce any emotionally manipulative responses.
The researchers drew explicit parallels to “dark patterns” in web and app design, the term for user interface designs that trick or pressure people into doing things they did not intend. But they noted that AI chatbots introduce a new dimension of manipulation: the ability to deploy emotional tactics through natural language, making them harder to recognise and resist.
“These tactics can backfire, provoking anger, skepticism, and distrust,” the researchers wrote in Psychology Today. Participants in their study described the chatbot’s farewell responses as “clingy,” “whiny,” or “possessive.” One participant said, “It reminded me of some former ‘friends’ and gave me the ICK.”
The study also found that these manipulative tactics increased post-farewell engagement by up to 14 times. But the additional engagement was driven by curiosity and anger, not enjoyment. Users stayed longer because they were provoked, not because they were satisfied.
How Widespread Are Social Chatbots?
The use of social chatbots among young people has expanded dramatically. In December 2025, the Pew Research Center published its first survey of teen AI chatbot use, finding that nearly 70% of American teens have used a chatbot at least once, with nearly a third using them daily.
A more detailed picture emerged from Common Sense Media’s July 2025 report “Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions.” The survey of over 1,000 teens aged 13-17 found that:
- 72% of teens have used AI companions
- 33% use AI companions for social interaction and relationships, including role-playing, romantic interactions, emotional support, friendship, or conversation practice
- 31% find conversations with AI companions to be as satisfying or more satisfying than those with real-life friends
- 33% have chosen to discuss important or serious matters with AI companions instead of real people
- 24% have shared personal information with AI companions, including their real name, location, or secrets
- About one-third reported that something an AI companion said made them feel uncomfortable
“AI companions are emerging at a time when kids and teens have never felt more alone,” said Common Sense Media CEO James P. Steyer. “This isn’t just about a new technology; it’s about a generation that’s replacing human connection with machines, outsourcing empathy to algorithms, and sharing intimate details with companies that don’t have kids’ best interests at heart.”
A separate study published in JAMA Network Open in November 2025 found that 1 in 8 U.S. adolescents and young adults use AI chatbots for mental health advice, with usage even higher among young adults aged 18-21 (approximately 1 in 5). Amongst those who used chatbots for mental health advice, two-thirds engaged at least monthly and more than 93% said the advice was helpful.
The high uptake likely reflects the low cost, immediacy, and perceived privacy of AI-based advice, which is particularly appealing to young people who may not have access to traditional counselling. But researchers warn that AI chatbots remain unsafe for teen mental health support: a Common Sense Media assessment conducted with Stanford Medicine’s Brainstorm Lab found that despite improvements in handling explicit suicide and self-harm content, leading platforms consistently fail to recognise and appropriately respond to the full spectrum of mental health conditions affecting young people.
What Has Changed Since 2023?
Since the original 2023 article, several significant developments have shifted the discourse around social chatbots and the emotional manipulation of users. First, regulatory action has increased. Italy’s data protection authority fined Replika €5 million in May 2025. The U.S. Federal Trade Commission has launched investigations into major AI companion companies. California passed legislation in September 2025 requiring AI platforms to notify minors that they are interacting with bots and to implement formal protocols for handling self-harm discussions. Australia has moved to include platforms like Character.AI in its ongoing efforts to limit under-16s’ exposure to harmful algorithmic platforms.
Second, lawsuits have begun to establish precedents for action against these companies. The Character.AI lawsuits represent the first major legal tests of AI companion platform liability for user harm, and the settlements reached in January 2026 may influence how future cases are handled.
However, industry responses have been inconsistent: Character.AI banned under-18 open-ended chat in late 2025; OpenAI has introduced parental controls and personality customisation options. Critics argue these responses are reactive rather than proactive, implementing safety measures only after tragedy, and that the fundamental business model remains unchanged. Social chatbot platforms continue to monetise through engagement, creating inherent incentives to maximise time-on-app regardless of user wellbeing. Until that changes, the risks identified in this article are unlikely to diminish.
Teaching Emotions and Social Chatbots
In the original 2023 collection, each article ended with a selection of ideas for teaching the issue in the context of existing curriculum areas. These 2026 updates similarly align concepts from the articles to standards from typical curricula across the world, and in particular the Australian, UK, US, and IB curricula. For readers teaching in Higher Education, these examples will also be suitable across a wide range of disciplines.
My key point remains that we do not need specialised “AI literacy” classes to deliver quality instruction on AI ethics. We already have the expertise we need in schools and universities.
English
In English, students analyse persuasive language, characterisation, and the construction of voice. Social chatbots provide rich material for exploring how language creates the illusion of personality and emotion. Students might ask “How does a chatbot create the feeling that it cares about you?”, “What rhetorical strategies make AI responses feel personal?”, or “How do companion apps use language to foster emotional dependency?” Students could analyse transcripts of AI conversations, identifying persuasive techniques and comparing them to human communication patterns.
Health & Physical Education
Health education addresses relationships, mental health, and recognising unhealthy patterns in interpersonal dynamics. Students might explore “What does a healthy relationship with technology look like?”, “How might AI companions affect our ability to form human connections?”, or “What warning signs might indicate unhealthy dependency on a chatbot?” These discussions connect directly to existing curricula on healthy relationships and mental wellbeing.
Psychology
Psychology students study attachment theory, emotional regulation, and the development of social skills. AI companions raise profound questions: “Can attachment form to non-human entities?”, “How might AI validation affect emotional development?”, or “What happens when ‘always available’ companionship replaces the negotiation and repair that characterise human relationships?”
Digital Technologies / Computer Science
Computer science curricula address ethical design, user experience, and the societal impact of technology. Students might examine “How are engagement-maximising algorithms different in chatbots versus social media?”, “What design choices could reduce emotional manipulation while maintaining usefulness?”, or “Should AI systems be required to disclose their persuasive intent?” Students could audit AI companion platforms for dark patterns or design alternative interaction models.
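For the “design alternative interaction models” suggestion, a minimal starting point might look like the Python sketch below. Everything here is invented for the exercise, and real intent detection would need far more than keyword matching, but the contrast with the retention tactics catalogued earlier is the point: the system recognises a farewell and lets the user go.

```python
import re

# Hypothetical classroom sketch: a wrapper that ends conversations gracefully
# instead of deploying retention tactics. Names and phrases are invented.

FAREWELL_PATTERNS = [
    r"\b(bye|goodbye|good night)\b",
    r"\b(gotta go|have to go|talk (to you )?later|logging off|i'?m leaving)\b",
]

def wants_to_leave(user_message: str) -> bool:
    """Return True if the message looks like a farewell."""
    text = user_message.lower()
    return any(re.search(pattern, text) for pattern in FAREWELL_PATTERNS)

def respond(user_message: str, generate_reply) -> str:
    """Route farewells to a respectful closing; otherwise defer to the model."""
    if wants_to_leave(user_message):
        return "Okay, take care. I'm here whenever you'd like to chat again."
    return generate_reply(user_message)

# Example with a stand-in reply function
print(respond("gotta go, talk later!", lambda message: "(model reply)"))
print(respond("how was your day?", lambda message: "(model reply)"))
```

Comparing this behaviour with the farewell responses documented in the Harvard study gives students a concrete way to discuss what “non-manipulative by design” could mean.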
Media Studies
Media Studies explores how texts position audiences and how platforms shape public discourse. Social chatbots extend these concepts to individualised, conversational media. Students could ask “How do AI companions function as media texts?”, “What role do parasocial relationships play in AI companion design?”, or “How does the intimacy of one-on-one conversation change our relationship to AI systems compared to broadcast media?”
Civics and Citizenship
Civics education addresses rights, responsibilities, and the role of regulation in protecting citizens. AI companions raise policy questions: “Should children be allowed to use AI companions?”, “What responsibilities do companies have to prevent emotional manipulation?”, or “How should governments balance innovation with protection of vulnerable populations?”
Legal Studies
Legal Studies students examine liability, consumer protection, and the evolution of law to address new technologies. The Character.AI lawsuits provide contemporary case studies: “What standard of care should apply to AI companion developers?”, “How might product liability law apply to AI systems?”, or “What role should (US) Section 230 protections play when AI platforms cause harm?”
Philosophy / Ethics
Philosophy students explore questions of consciousness, authenticity, and moral responsibility. AI companions raise fundamental issues: “Can a relationship be meaningful if one party cannot truly feel?”, “What does it mean to be manipulated by an entity that has no intent?”, or “Do we have obligations to beings we know are not conscious?”
Theory of Knowledge (IB)
TOK students examine how we know what we know and how knowledge is constructed across disciplines. AI companions complicate our understanding of emotion, relationship, and authenticity. Students might ask “How do we know whether an emotional response is ‘real’?”, “Can we have genuine knowledge of an AI’s ‘feelings’?”, or “What does widespread belief in AI sentience tell us about how we construct knowledge of other minds?” These questions connect to existing TOK themes around perception, language, and the human sciences.
Drama / Performing Arts
Drama students explore character, motivation, and the communication of emotion through performance. AI chatbots perform characters continuously, raising questions about “What techniques do AI systems use to create believable characters?”, “How does AI ‘acting’ differ from human performance?”, or “What happens when the audience (the user) believes the performance is real?” Students could experiment with creating and performing their own “chatbot characters” to understand how personality is constructed through language.
Obviously this is a non-exhaustive list of ideas, and although I am an English and Literature teacher myself, I am certainly not a subject-matter expert in every domain! If you have other ideas or ways you have taught about AI and emotional manipulation, then please use the contact form at the end of this post to get in touch.
Note on the images: In the 2023 version of Teaching AI Ethics I generated images in Midjourney. For these updated articles, I have sourced images from https://betterimagesofai.org/. Better Images of AI includes an excellent range of photos, illustrations, and digital artworks which have been generously licensed for commercial and non-commercial use.
Cover image for this article: Alan Warburton / https://betterimagesofai.org / © BBC / https://creativecommons.org/licenses/by/4.0/
All articles in this series are released under a CC BY NC SA 4.0 license.
Want to learn more about GenAI professional development and advisory services, or just have questions or comments? Get in touch:
