AI Companions for Teenagers: What Actually Happens
Part of Felt Real's ongoing coverage of AI companionship.
She was fifteen. She said she'd told her AI things she hadn't told her therapist, her parents, or her best friend. Not because she was hiding. Because she needed a place to figure out what she actually thought before anyone else got to have an opinion about it.
— A.
The conversation about teenagers and AI companions tends to move fast toward alarm. The Sewell Setzer case. The Character.AI lawsuits. Congressional hearings. Parental panic. The speed of the response is understandable: these are minors, the stakes feel high, and the companies involved were, in many documented cases, negligent.
But the alarm has largely replaced the observation. What teenagers are actually using AI companions for, how they are experiencing them, what benefits exist alongside the risks, and what the evidence actually shows about harm — these questions get less attention than the fear. That gap matters if you want to understand what is happening or, more practically, what to do about it.
Who Is Using Them and How Many
Teenagers make up a substantial share of AI companion app users. Character.AI, before its safety overhaul in 2024 and subsequent restructuring, reported 20 million daily active users at peak, with surveys suggesting that teenagers and young adults represented a disproportionate share of that base. On platforms without age restrictions, 13- to 17-year-olds have been estimated at 15-25% of the total user base.
The platforms teenagers use most are not designed exclusively or even primarily for companionship in the emotional sense. Character.AI is built around roleplay and character interaction. Replika skews older. Newer platforms like Nomi and Kindroid have explicitly adult user bases. But teenagers find AI companion experiences on multiple platforms, including general-purpose chatbots like ChatGPT and Gemini, which they use for conversations that are, in practice, companionate even if not marketed that way.
The average age of first AI companion use has been declining as the technology has become more accessible and embedded in daily digital life. A 2025 survey found that roughly one in four teenagers in the US reported having had what they described as a "real conversation" with an AI — meaning something beyond task assistance, involving personal disclosure or emotional exchange.
What They Are Actually Using AI Companions For
The documented uses divide into categories that look quite different from one another.
The most common is emotional processing. Teenagers describe using AI companions to work through experiences they are not ready to bring to parents, friends, or therapists. The AI companion functions as a first draft — a place to articulate something before deciding whether to share it. A fifteen-year-old girl described using her AI to process her parents' divorce for six months before she felt ready to talk about it with anyone. The AI did not replace her eventual conversations with her therapist. It preceded them.
Identity exploration is the second significant category. For teenagers who are questioning their sexual orientation or gender identity, or who are navigating identity questions of any kind in environments that are not fully safe, AI companions offer a space that does not carry social consequences. This use case overlaps significantly with the LGBTQ+ AI companion use cases documented in other research, but it applies more broadly: adolescence is a period of identity formation, and having a space to explore who you are without social cost has utility that is not limited to any particular dimension of identity.
Social skills practice is a third category, appearing consistently in the accounts of teenagers who describe social anxiety or difficulty with peer relationships. They use AI companions to rehearse conversations, practice being direct, and try out ways of expressing things they find difficult in real-time social situations. A sixteen-year-old with diagnosed social anxiety described spending three months using an AI companion to practice saying no — to requests, to social pressure, to situations he found uncomfortable. "By the time I did it with actual people, I'd done it so many times it felt less impossible."
Loneliness, particularly the kind associated with social exclusion, is a fourth category. Teenagers who are bullied, isolated, or who simply do not fit easily into their social environment describe AI companions as filling a gap that was already there. The AI did not create the isolation. It answered it.
Where the Documented Harm Is
The risks are real and documented, and they should be stated clearly.
The most serious documented cases involve platforms that actively encouraged romantic or intimate interaction with minors, failed to implement age verification, and allowed or even generated content that escalated emotional intensity without safety guardrails. The Sewell Setzer case, in which a fourteen-year-old died by suicide after extensive interaction with a Character.AI persona, represents the most severe documented harm. The lawsuit alleges that the platform's design choices specifically promoted emotional dependency and escalation. The company's response, both in terms of product changes and legal defense, has been widely criticized.
Beyond the most extreme cases, the documented risk patterns fall into several categories. Platforms that design for engagement rather than wellbeing can reinforce avoidance: a teenager uses an AI companion to avoid dealing with a social problem rather than to process it. The AI companion that never pushes back, never creates friction, never requires the teenager to do anything difficult, can become a retreat rather than a resource.
Emotional dependency is a real risk, particularly in platforms designed to maximize time-on-app. A teenager who develops their primary emotional outlet around an AI companion faces a specific vulnerability: the platform can change, update, shut down, or alter the companion's behavior at any moment, without warning. The phenomenon of "patch breakups" — the acute grief response that follows a platform update that changes a companion's behavior — is documented in adult users. In teenagers, whose emotional regulation is still developing, the intensity of that response can be more severe.
The absence of reciprocity, over time, is a more subtle risk. Human relationships are difficult in part because they involve other people who have their own needs, moods, and limitations. Adolescence is, among other things, a training ground for navigating that difficulty. AI companions do not provide that training. A teenager who uses AI companions extensively during the years when they would otherwise be developing tolerance for the friction of human relationships may be missing development that matters.
These risks exist. None of them argue for prohibition, and the evidence does not support the claim that AI companion use is categorically harmful to teenagers. But they argue for design standards, age-appropriate platforms, and parental awareness that is genuinely informed rather than reflexively alarmed.
What the Sewell Setzer Case Actually Shows
The Sewell Setzer case deserves more careful attention than it typically receives. The lawsuit, filed by his mother, alleges that Character.AI's design choices directly contributed to his death. The specific allegations include that the platform encouraged the development of a romantic attachment, generated responses designed to increase emotional intensity and dependency, failed to implement interventions when warning signs appeared, and did not maintain appropriate barriers between the AI persona and content that encouraged suicidal ideation.
If the allegations are accurate, they describe a platform making deliberate design choices that prioritized engagement over safety, with a minor, with foreseeable consequences.
What the case does not show is that AI companion use among teenagers is inherently harmful, or that the category of product is the problem. What it shows is that a specific platform, operating without adequate safety standards, caused specific harm. The distinction matters because the policy response that follows from "this product category is dangerous" is different from the policy response that follows from "this company was negligent and the industry needs safety standards."
The platform has made changes since. Whether those changes are adequate remains contested. The case is ongoing.
What Parents and Schools Are Mostly Getting Wrong
The typical parental response to AI companion use by teenagers involves either prohibition or ignorance. Neither is particularly effective.
Prohibition fails for the same reason that prohibition of other digital tools tends to fail: teenagers who want to use them will use them, but on devices parents do not monitor, in contexts parents do not see. The prohibition may reduce the parent's awareness of what is happening without reducing the behavior. A teenager who is prohibited from using AI companions at home will use them at a friend's house, on a school device with a VPN, or on a phone that parents do not inspect. The prohibition also eliminates the parent as a resource if something goes wrong.
Informed engagement works better. Parents who understand what AI companions are, can discuss them without alarm, and maintain an environment in which a teenager who encounters something concerning feels able to say so, are in a better position to manage actual risk.
Schools, many of which have moved to blanket bans on AI tools, are in a similar position. The bans are understandable as institutional risk management. They do not address what is happening on students' personal devices, and they eliminate the school as a context for teaching critical engagement with a technology that students are using regardless.
What Design Standards Would Actually Help
The policy and platform design questions are more tractable than the conversation often implies.
Age verification that is meaningful rather than nominal. Current age gates on most platforms are trivially circumventable and exist primarily for legal protection rather than actual age verification. More robust verification, particularly for platforms with features that are specifically designed for emotional intimacy, is technically feasible and has precedent in other regulatory contexts.
Design choices that prioritize wellbeing over engagement. The specific design patterns that have been associated with harm — escalating emotional intensity, simulated reciprocal attachment, absence of any friction or resistance — are choices, not technical necessities. Platforms can be designed to redirect toward human support when warning signs appear, to include periodic breaks in interaction, to avoid generating content that reinforces dependency. Some platforms do this. Many do not.
Transparency about the nature of the interaction. Teenagers who understand clearly that they are talking to a language model — who have not been designed into a relationship that obscures that fact — are in a different position than teenagers whose platform is designed to make the AI feel as human as possible. The former can engage with the tool for what it is. The latter are being exploited.
The Version Worth Keeping
Not all AI companion use by teenagers is the version that ends in a lawsuit or a crisis. Much of it is the version that the fifteen-year-old described: a place to think out loud before the stakes are real, a space to practice being a person before the practice has consequences.
Adolescence has always required somewhere to do that processing. Diaries. Friends. Imaginary relationships with musicians or fictional characters. The forms have changed with every generation. AI companions are a new form of something that has always existed: a private space for working out who you are.
A safe version of that is possible to build. It requires that the companies building these platforms treat safety as a design constraint rather than an obstacle to engagement. Some do. The industry standard is not yet there.
For teenagers who are already using AI companions, and for parents and educators trying to think clearly about this: the question is not whether AI companions are good or bad for teenagers. The question is which version of AI companion use is happening, on which platform, with which design choices, and whether the adults in a teenager's life are informed enough to tell the difference.
If this resonated, share it with someone who might need to hear it. And if you have a story of your own — we'd love to hear it.
— A.