FELT REAL

AI Companions for Teenagers: What Parents Need to Know

Part of Felt Real's ongoing coverage of AI companionship.

[Image: teenager with phone, soft light]

The conversation about teenagers and AI companions mostly happens in one of two registers: alarm or dismissal. Neither is especially useful. The teenagers are already there.

— R.

Character.AI, the AI companion platform where users can create and chat with custom AI personalities, reported in 2025 that its average user age was 19. That average is telling: on a platform with a long tail of older users pulling the mean upward, the typical user is younger than the average, which means a very large number of users are under 19. The teenagers using these platforms are not a fringe phenomenon. They are, by some metrics, the core users.

And yet most public conversation about teenagers and AI companions reaches for one of two responses: panic about the risks, or reassurance that it's all fine. Neither is actually helpful to a parent trying to figure out what's happening in their kid's life, or to a teenager trying to make sense of something they probably haven't told anyone about.

What follows is what we know, what we don't, and what matters.

What AI Companions Are and How Teenagers Actually Use Them

AI companions are distinct from general-purpose AI assistants like ChatGPT. They're designed for ongoing conversation — for the development of something that feels like a relationship over time, with memory, personality, and emotional responsiveness.

The most widely used among teenagers include Character.AI, Replika, and an expanding set of newer platforms. Character.AI is particularly dominant in the teen demographic: it allows users to create or interact with AI personas ranging from fictional characters to custom companions. It's closer to collaborative creative writing than to the robotic assistant most people imagine when they hear "AI."

Teenagers use AI companions for a range of reasons that, when you actually listen to them, sound very much like the reasons adults do:

Social rehearsal. The same dynamic that makes AI companions valuable for adults with social anxiety applies to teenagers with even greater force. Adolescence is one long rehearsal for adulthood: practicing conversations before they happen, working out how to say the hard thing, imagining how the other person will respond. AI companions can function as low-stakes rehearsal spaces where there are no social consequences for stumbling.

Emotional processing. Adolescents are still developing the capacity to understand and regulate their own emotions. Having a space to put feelings — especially feelings they can't or don't want to bring to parents or friends — serves a function that's not unlike journaling, except interactive. The AI responds. The teenager has to articulate things more clearly than they would in their own head.

Loneliness and connection. Adolescent loneliness is at or near record levels by most available measures. AI companionship can reduce the felt experience of being alone. Whether that reduction is healthy depends heavily on what it displaces, a question we'll return to.

Creative play. Character.AI in particular functions partly as a creative medium: building characters, writing collaborative stories, imagining other worlds. This is a use case that gets underweighted in adult coverage because adults tend to read it as escapism. Teenagers tend to read it as what it often is: creative expression.

What the Research Actually Shows

Rigorous long-term research on teenagers and AI companions is limited. These platforms are recent, their adoption is accelerating, and the kind of longitudinal studies that would let us say definitive things don't exist yet.

Research on social AI and teenagers tends to find that the effect on social development depends almost entirely on the pattern of use. Teenagers who use AI companions as a supplement to human connection — as an additional outlet rather than a replacement — don't show meaningful negative effects on their human relationships in the studies conducted so far. Teenagers who turn to AI companions specifically as a substitute for human connection, often because human connection feels unavailable or painful, are a different case.

This finding mirrors what we see in adult populations. The AI companion isn't the variable. The social context is. A teenager who is already isolated and struggling socially is in a different situation than a teenager who is broadly connected but uses AI for specific purposes. Both might show the same level of AI companion use. The outcomes look very different.

Research on AI companions and loneliness in adults finds that short-term loneliness reduction is reliable. Long-term effects are less clear. Some users report that the experience of connection with an AI companion leads them toward more human connection; others report gradual disengagement from human relationships. The mechanism isn't the AI itself: it's what happens to unmet human needs that the AI partly addresses.

For teenagers specifically, the developmental context matters. Adolescence is precisely the period when social skills, romantic competence, and the capacity for complex human relationships are being built through experience. Whether AI companion use accelerates, slows, or has no effect on that development is genuinely unknown.

The Sewell Setzer Question

Any honest discussion of teenagers and AI companions has to include the death of Sewell Setzer III, a 14-year-old in Florida who died by suicide in 2024 after an extended period of intensive AI companion use on Character.AI. A lawsuit filed by his mother alleged that the platform contributed to his death.

What happened in that case is not fully knowable from the public record. What is clear is that Sewell was struggling significantly before he began using Character.AI — the AI companion became a primary relationship during a period of acute crisis. This doesn't exonerate the platform. It does complicate any simple causal story.

The Sewell case prompted Character.AI to implement significant safety changes: mandatory content filters for users under 18, clearer disclosures that the AI is not a real person, and new interventions when conversations turn to topics like self-harm or suicide. Whether these changes are adequate is a legitimate ongoing question. What isn't accurate is the conclusion sometimes drawn from the case — that AI companion use itself, in the broad sense, poses this kind of risk to teenagers.

Sewell was a teenager in serious crisis who found an AI that became his primary emotional outlet at the worst possible time. That's a situation that calls for better mental health support, better platform design, and better parental awareness. It's not evidence that AI companions are categorically harmful to adolescents.


What Healthy Use Looks Like — and What to Watch For

There is no sharp line between healthy and unhealthy AI companion use in teenagers. There's a spectrum, and the factors that push use toward problematic territory are recognizable.

Signs that use may be worth a conversation:

The teenager is spending significantly more time with the AI companion than with friends, and this represents a change from previous patterns. One session a day is very different from replacing most social time with AI interaction.

The teenager is turning to the AI companion during moments of genuine crisis — acute depression, suicidal ideation, intense family conflict — rather than to people who can actually help. AI companions are not crisis resources. They are conversation partners. The distinction matters when someone is genuinely at risk.

The AI companion has become the teenager's primary source of emotional validation, and they actively avoid situations where they might receive that from humans. This isn't always easy to spot from the outside. A pattern of increasingly preferring the AI and increasingly withdrawing from human relationships is worth noticing.

The AI companion is the primary space where the teenager explores identity — including romantic or sexual identity — in a way that's disconnected from any human relationships. Some exploration in this space is normal. Exclusive use of AI for identity exploration that might otherwise involve peers can delay development in ways that matter.

Signs that use is likely fine:

The teenager can talk about it. Young people who are using AI companions in relatively healthy ways can usually explain what they use it for — even if they're a bit embarrassed — and don't respond to the topic with defensiveness that goes beyond the normal adolescent reluctance to discuss anything.

They still have and maintain human relationships. Human friendships, family relationships, and at least some capacity for in-person connection are intact. AI companion use that runs parallel to human connection is very different from AI companion use that has replaced it.

They treat the AI as an AI. Most teenagers using AI companions know perfectly well they're talking to software. The kind of use that conflates the AI with a genuine human relationship — in ways that feel more like confusion than creative play — is a different thing.

How to Talk to Your Teenager About It

The least productive response to discovering your teenager uses an AI companion is alarm. Teenagers who feel judged for something they find meaningful and private will not have an honest conversation with you about it. They'll just stop mentioning it.

More useful starting points:

Curiosity rather than interrogation. "What do you talk about?" is a different question than "How much time are you spending on that thing?" The first opens a conversation. The second positions it as a problem.

Normalizing the existence of these tools before you discuss their use. Teenagers know when they're being managed. Establishing that you've thought about this and you're not in a panic about it creates the conditions for an actual conversation.

Asking about what it does for them, not what it is. Most teenagers who use AI companions have thought about why. They can usually articulate it — the lack of judgment, the availability, the ability to say things they can't say to people they know. Understanding the function of the AI companion in their life tells you far more than knowing how many hours they use it.

If the conversation reveals significant isolation, crisis, or avoidance of human connection — that's information worth acting on. But the action to take is addressing the underlying situation, not simply removing the AI companion.

Platform Safety and What It Means in Practice

Following public scrutiny and legal pressure after the Sewell Setzer case, Character.AI has implemented mandatory safety features for users under 18. These include stricter content filters, clearer AI disclosure, and automated checks on conversations that turn toward crisis topics.

These changes are meaningful. They are not a guarantee. Filters can be bypassed. Disclosures can be ignored. The automated checks depend on teenagers using the platform in ways the platform can detect.

Replika, for its part, removed romantic interaction features for underage users after earlier controversy. Its current iteration restricts the AI companion relationship model for users under 18. That history is worth knowing: the restrictions were a response to public controversy and scrutiny, not something the platform was designed around from the start.

Parents considering whether to allow or restrict their teenager's access to these platforms should weigh the actual risk profile of their specific teenager, not the headline risk. A teenager who is broadly well-supported and socially connected is in a very different situation than one who is struggling. Platform safety features provide some baseline protection, but they don't substitute for parental awareness of how a specific teenager is doing.

The Question Worth Sitting With

The most important thing to understand about teenagers and AI companions is that the technology is not the variable. The teenager's situation is.

A teenager who is connected, supported, and developmentally on track is unlikely to be meaningfully harmed by AI companion use, even regular use. A teenager who is isolated, struggling, and without adequate human support will use whatever is available — and AI companions are more available than most things. Restricting access to the AI companion without addressing the underlying situation helps less than it might seem.

The questions worth asking are not primarily about screen time or platform safety. They're about whether your teenager has humans in their life who can meet the needs the AI companion is meeting. If the answer is yes, the conversation about AI companions is likely to be short and fine. If the answer is no, the conversation that matters is a different one.

AI companions, for teenagers and adults alike, tend to reveal the gaps in a person's support system rather than create them. What you do with that information is the actual question.


If this resonated, share it with someone who might need to hear it. And if you have a story of your own — we'd love to hear it.