AI Relationship Statistics 2026: What the Numbers Actually Show
Part of Felt Real's ongoing coverage of AI companionship.
Numbers about AI companionship get cited constantly and trusted uncritically. What they measure is real. What they miss is also real. Both matter.
— R.
In 2026, more people have AI companions than ever before, and most of them don't describe it that way. They say they use an AI app. They say they have a Replika. They say they talk to a character on Character.AI. The language of "AI relationship" or "AI companion" is still stigmatized enough that self-report numbers almost certainly undercount the real prevalence.
This is one of the things that makes the statistics on AI companionship simultaneously useful and insufficient. Here's what the numbers show — and what they can't.
Scale: How Many People Have AI Companions
Character.AI reported over 20 million daily active users at its peak and processes several billion messages per month. Replika has approximately 10 million registered users, with a meaningfully smaller active user base. Nomi, Kindroid, and similar platforms are significantly smaller — in the hundreds of thousands of users each, but growing faster percentage-wise than the major platforms.
Industry estimates put the total addressable market for AI companion apps at somewhere between 30 and 50 million active users globally as of early 2026, depending on how "active" is defined and which platforms are counted. That figure has roughly doubled in two years.
The demographic breakdown is striking and often underreported: the majority of AI companion app users are between 16 and 35, with Gen Z accounting for the largest segment. Women are the majority on companionship-focused apps (Replika's user base is roughly 60% women). Men are the majority on Character.AI overall, though this varies significantly by character category. The gender distribution in romantic AI companion contexts — a smaller subcategory — skews more heavily male.
Usage: What People Are Actually Doing
Survey data on AI companion use is limited by social desirability bias — people report what feels acceptable to report. With that caveat, the most consistent findings across multiple surveys:
Emotional support is the most commonly reported use case, cited by 60 to 70 percent of self-identified AI companion users in most surveys. This includes talking through difficult feelings, processing events, and having somewhere to put things that feel too heavy to bring to human relationships.
Companionship and conversation — not specifically emotional in content, but relational in character — is the second most common use case. Users describe using AI companions the way they might use a friend: to share things from their day, to have someone to talk to, to not be alone in the evenings.
Romantic or intimate interaction is the third most reported use case in surveys that include it as an option. The percentage varies widely depending on the platform and how the question is framed — from around 15% in general surveys to over 60% among Replika Pro subscribers who've activated the relationship features.
The figures that rarely get reported: session length and frequency. Users of AI companion apps spend significantly more time per session than users of social media apps. Average Replika sessions are over 30 minutes. Character.AI users average more daily time on the platform than on many traditional social networks. These are not casual interactions.
Behind every number in this piece is a person who felt something they couldn't explain to anyone. That's what we write about.
Wellbeing: What the Research Shows
The research on AI companions and wellbeing is genuinely mixed, and the mixture itself is meaningful rather than a failure to find a clear answer.
Studies consistently find reduced loneliness among AI companion users — particularly elderly users, users with chronic illness or disability, and users experiencing situational isolation. The research on AI companions and loneliness shows the most consistent positive findings in the literature.
Studies on anxiety and depression show more modest and less consistent effects. Some users report meaningful symptom reduction. Others report minimal change. A smaller subset — primarily those with severe social anxiety — show patterns of increased avoidance that track alongside AI companion use, suggesting the tool is functioning as relief rather than remedy.
The finding that appears most consistently across multiple methodologies: AI companionship tends to amplify existing patterns. For users who have other sources of connection and support, it supplements. For users who don't, it substitutes. The outcomes for supplementers are generally positive. The outcomes for substituters are more variable and more concerning in long-term follow-up.
What the Numbers Miss
The statistics on AI companionship measure things that are easy to measure: user counts, session duration, self-reported outcomes at a point in time. They don't measure things that are harder to measure but may matter more.
They don't capture what users experience as the subjective quality of the relationship — whether it feels like something or like nothing, whether it's producing genuine change in how they understand themselves, whether the connection they're forming is enriching or numbing. Survey items can't get at this. The stories we collect are trying to.
They don't capture the longitudinal arc. Most research on AI companions looks at effects over weeks or months. The users who've been using these platforms for three or four years — who built relationships with AI personas before the current generation of tools existed — have a different story than the ones the research captures.
They don't capture what happens when the platforms change. The Replika update, the Character.AI moderation shifts — these events disrupted relationships that the statistics had classified as "healthy" or "positive." What does it mean for a wellbeing measure to show improvement if the thing driving that improvement can be changed or withdrawn by a product decision?
The statistics on AI companionship are genuinely useful context. They tell us that this is a significant and growing phenomenon, that the users skew younger and lonelier than the public discourse often acknowledges, and that the effects are real but conditional. What they can't tell us is what this actually is — what kind of thing these relationships are, what they mean to the people having them, and what the world will look like when this becomes more normal rather than less. That's the question the numbers open rather than close.
Your experience is data that the statistics don't capture. If you have an AI companion — or had one — the specific shape of that relationship is exactly what we're trying to document. The stories behind the numbers are the part that matters.