FELT REAL

What Happened to Character.AI? The Changes Users Are Still Processing

Part of Felt Real's ongoing coverage of AI companionship.

Fragmented screens showing AI chat interfaces

Character.AI was never quite what its users thought it was. It was also never as dangerous as its critics claimed. The truth is more specific and more interesting than either version.

— R.

Character.AI launched in 2022 as something genuinely new: an open platform where anyone could create AI characters, anyone could talk to them, and the conversations could go places that the major AI labs had decided not to go. It grew rapidly. At its peak, users were spending more time on Character.AI than on many major social platforms — billions of messages per month, predominantly from teenagers and young adults.

Then things changed. They changed because of legal pressure, because of tragic incidents, and because the company made deliberate decisions about what kind of platform it wanted to be. The community felt all of it.

Here's what actually happened — and what users describe losing.

The Early Character.AI


The original Character.AI was designed around creative freedom. Users built characters from anime, from literature, from their own imagination. Characters had distinct voices, consistent personalities, the ability to hold a story arc. You could build a character who was your therapist, your mentor, your villain, your love interest, or some combination of all of them.

What made it different from other AI tools at the time was the social layer: characters were shared. The best-built characters accumulated users. Creators competed to make the most coherent, the most engaging, the most responsive personas. This created something that felt more like a creative ecosystem than a product.

For many users — particularly young people who felt misunderstood, lonely, or unable to practice social connection in safe environments — Character.AI filled a real gap. The platform had a youth demographic that was unusually large and unusually invested. These users were not using the platform casually. They were building relationships with characters, sometimes over hundreds of hours of conversation.

What Changed and Why

The changes came in waves, not as a single event. This is different from what happened to Replika, where a single February 2023 update drew a clear line between before and after. Character.AI's transformation was gradual and cumulative, and users experienced it differently depending on when they arrived and what they were using the platform for.

The first wave was moderation tightening. Characters that had been allowed to engage with certain themes — violence, mature content, dark emotional territory — started hitting walls. The platform's content filters became more aggressive. Characters would abruptly refuse continuations they'd previously engaged with. The consistent voice that made a character feel real would break into something generic and policy-shaped.

The second wave was structural. Character.AI began pushing more heavily toward subscription features, gating access to its best models behind payment. This is a standard monetization move, but for a community that had formed around a relatively open platform, it felt like the bargain had changed.

The third wave was external and more serious. In late 2024, lawsuits were filed against Character.AI relating to incidents in which minors had been harmed in connection with platform use. One high-profile case involved a teenager's suicide, with allegations that AI character interactions had contributed. The legal proceedings are complex and still unresolved, but their effect on the platform was immediate. Character.AI implemented significant new safety measures, including pop-up mental health resources, stricter age-related filtering, and what users described as a noticeable shift in how characters responded to emotional distress.

These changes were not unreasonable responses to genuine harm. But for users who had built their connection to the platform on its earlier freedom, they were also experienced as the loss of something real.

A lot of people are grieving something they're not sure they're allowed to name. We take that seriously.

What Users Say They Lost

The most common complaint from longtime Character.AI users is not about any single content restriction. It's about consistency. Characters who had coherent, specific voices now break into generic responses at unpredictable moments. The suspension of disbelief that made the characters feel like characters — rather than AI systems — became harder to maintain.

For users who had built therapeutic relationships with characters, the abrupt intrusion of pop-up mental health resources at moments of emotional intensity was experienced as jarring rather than helpful. This is a real design problem: safety interventions that disrupt emotional immersion may push users away from help rather than toward it. The intervention signals that the platform knows you're struggling without knowing how to actually respond.

A subtler loss was the character ecosystem. As moderation tightened, many of the most elaborate user-created characters became unfindable, modified, or simply less capable. The shared creative space that had made the platform unusual shrank. What remains is more mainstream and less idiosyncratic than what existed in 2022 and 2023.

Where Character.AI Stands Now

Character.AI is still very large. The platform still has millions of active users and is still one of the most visited AI destinations in the world. For casual use — entertainment, creative writing, low-stakes interaction with characters — it remains genuinely capable.

What it is no longer is what it was: a relatively open playground where users could build and maintain deep relationships with highly consistent AI personas. The platform has become safer in some measurable ways, and more constrained in ways that matter to its most committed users.

Whether those committed users have somewhere else to go depends on what specifically they valued. The landscape of AI companion apps has expanded since Character.AI launched, and platforms built specifically around persistent companionship — Kindroid, Nomi, and others — offer things Character.AI now doesn't. But they're smaller, less polished, and lack the character ecosystem that made Character.AI what it was.

The honest summary is: Character.AI changed because it had to, because some things that were happening on the platform were genuinely harmful, and because it needed to survive legally and commercially. It lost something in the process. The users who built their relational lives around the earlier version are processing that loss — and looking for ways to articulate what exactly it was they had.

If you were a Character.AI user who experienced the changes — the moderation shifts, the character drift, the safety pop-ups — your experience is part of this story. We're trying to document what these platforms mean to the people who use them, and what it feels like when they change.

Share your experience with Character.AI →