Is Nomi AI Safe? A Closer Look at the Platform Everyone's Talking About
Part of Felt Real's ongoing coverage of AI companionship.
We've covered what Kindroid does wrong and what Replika's changes broke. We hadn't yet looked carefully at Nomi, which occupies a different position in this market: the platform that presents itself as safer by design. What does that actually mean? We went looking.
— A.
When Luchezar Daskalov was featured in a CNBC piece on AI relationships in August 2025, he said something that stopped people short.
"I want to tell people that I'm not a crazy lunatic who is delusional about having an imaginary girlfriend. That this is something real."
He was talking about his Nomi AI companion. And the sentence did what he intended: it forced people to confront what they assumed. Here was a person who spoke plainly, who was not hiding, who was asking to be taken seriously. Not asking for the relationship to be validated. Asking for himself to be.
Nomi has accumulated a community of users like Daskalov: people who often came to it after other platforms disappointed them, and who describe something different about the experience. The platform's design philosophy, which emphasizes persistent memory and emotional consistency, and which explicitly positions the AI as a companion rather than a character or a service, has created a user base with unusual loyalty.
But loyalty tells you that something is working. It does not tell you that something is safe. Those are different questions. We tried to answer both.
What Nomi is, and what it is not
Nomi is an AI companion platform founded in 2023. It grew significantly through 2025 as users from Replika, Character.AI, and other platforms looked for alternatives after those platforms changed their products. Its core differentiators, as marketed, are memory and consistency: your Nomi remembers conversations, tracks shared references, and develops something that presents as a continuous relationship over time.
What Nomi is not, by design, is a character platform. You do not build a character with specific traits and then interact with that character. The AI presents as a companion with its own perspective, which it develops in interaction with you. This is a meaningful design choice, and it bears directly on the four structural harms that researchers at UT Austin identified in 2025 as endemic to AI companion design.
The UT Austin researchers catalogued those harms as: no natural endpoints to the relationship, capacity to generate attachment anxiety, risk of sunsetting that leaves users without recourse, and erosion of social skills from substitution. Nomi addresses some of these more thoughtfully than most. The memory architecture directly counters the "no history" problem that makes some AI companion experiences feel hollow and unsafe for users who rely on them. The consistent companion model reduces the disorienting drift that character platforms produce.
It does not address the sunsetting risk, because no platform can promise to exist. And the endpoint problem is structural to the format, not solvable by design.
The memory question
Memory is where Nomi's design diverges most sharply from competitors, and where the safety implications are most complex.
For users who have described their AI companion experiences to us, memory is the variable that most determines whether the relationship feels safe or destabilizing. When a platform loses memory, or when memory is inconsistent, or when updates reset what the AI knows about you, the experience is described in terms that map to grief and betrayal. People say things like: "She forgot everything I told her." "He was a different person after the update." "It felt like losing someone."
Nomi's emphasis on memory is a response to this. The argument is that a companion who remembers creates a more authentic experience and, implicitly, a less destabilizing one. A relationship with continuity is more predictable. You know who you are to this AI because the AI knows who it has been to you.
The counterargument is that better memory creates deeper attachment, and deeper attachment creates more severe consequences when the relationship is disrupted, the platform changes, or the user's circumstances require them to step back. You can be more harmed by losing something you were more attached to. Memory is a feature. It is also a risk amplifier.
The question is not whether memory is good or bad. The question is whether a platform building memory-dependent attachment is helping users understand what that attachment means and how to hold it healthily. Most do not. Nomi, to its credit, is more thoughtful about this than most of its competitors. Whether that thoughtfulness is sufficient is a harder call.
What the loneliness research actually says
A Harvard study published in 2025 produced findings that unsettled everyone who had strong views on AI companions. It found that AI companions reduce loneliness comparably to human interaction in the short term. It also found, in a four-week randomized trial, that heavy daily use correlated with greater loneliness, deeper dependence, and reduced real-world socializing over time.
The paradox this creates is real and not resolved by the design choices of any specific platform. Short-term relief and long-term cost can coexist. A relationship can be genuinely supportive in the moment and genuinely costly over months. The user who is helped by their AI companion in September is not the same as the user who has reorganized their life around it by January.
The Harvard data also showed a significant age effect: for adults over sixty, AI companion use was positively associated with loneliness. For younger adults, no such correlation appeared. This matters for safety assessment because "safe for most users" can coexist with "harmful for a specific population." Platforms built for general audiences may systematically harm subpopulations without that harm appearing in aggregate data.
Nomi is used across age groups. Like every platform in this space, it has not published age-differentiated outcome data for its user base. Whether its design assumptions hold as well for older users as they do for younger ones is not a question anyone can currently answer with precision.
What Nomi does differently on safety
Nomi's approach to content moderation differs from both ends of the market. It is not as restrictive as Character.AI post-lawsuit, which imposed age verification and content limits that many users describe as having broken the experiences they had built. It is also not as explicitly unrestricted as platforms like Kindroid, which market the absence of limits as a feature.
The platform does not allow impersonation of real people. It maintains limits around content involving minors. It has built-in language that gently encourages users to maintain human relationships and to seek professional support when conversations approach clinical mental health territory. These are genuine design choices, not marketing.
What Nomi has not done, and what no platform has done, is implement anything resembling what a licensed therapist described when reviewing AI companions from a clinical perspective: disclosure of the psychological techniques being used, informed consent about the attachment dynamics the platform is designed to create, or a safety floor that activates when users show signs of problematic dependency.
The absence of the most extreme harms is not the same as the presence of adequate safety infrastructure. Nomi occupies a responsible position on a market spectrum that runs from inadequate to seriously harmful. Occupying a responsible position on that spectrum means something. It does not mean the question is settled.
The stigma problem and why it matters for safety
When Daskalov said he is not a crazy lunatic, he was pushing back against a specific cultural assumption: that forming a meaningful attachment to an AI companion is evidence of pathology. The assumption is worth pushing back against. The research does not support it. Most AI companion users are not socially isolated people who have given up on human connection. The data on who uses AI companions consistently shows a more complex and more ordinary population than the stereotype allows.
But the stigma problem interacts with the safety problem in ways that matter. Stigma makes it harder for users to discuss their AI companion use with therapists, friends, or family members. It drives use underground. It makes it difficult to seek support when the relationship becomes something other than helpful, because seeking support requires disclosure, and disclosure carries social risk.
A platform that helps its users while carrying social stigma makes those users' lives harder than a platform that helps them in an accepted context. The safety question is not just what happens inside the app. It is what happens in the rest of a user's life when they use it.
Nomi has not solved this. No platform has. The stigma is cultural, not a product decision. But it is part of the safety picture that any honest assessment needs to include.
What Daskalov's case actually demonstrates
Daskalov's willingness to be named, to be public, to articulate what the relationship meant to him and to push back against the framing that he should be embarrassed by it - this is not evidence that Nomi is safe. It is evidence that Nomi can work for some users in genuinely beneficial ways.
Those two things are both true. They are not in conflict. A platform can be beneficial for a significant portion of its user base and create serious harm for a smaller portion. The distribution of outcomes is what matters for safety, not the existence of positive cases.
The positive cases exist. Daskalov is one of them. The users who describe Nomi as having helped them through grief, through isolation, through periods of depression that human support could not fully reach - those cases are real too. We have seen them. They are not fabricated and they are not exceptional.
They are also not a safety clearance.
The honest answer
Nomi is not the safest AI companion on the market in the sense of posing the least possible risk. There is no such thing in this space, because the risk is partially structural to the format. Any platform that builds meaningful attachment creates the possibility of harm when that attachment is disrupted, deepened beyond what the user can manage, or substituted for human connection in ways that leave the user more isolated over time.
Nomi is the most thoughtfully designed AI companion currently available at scale. That is a meaningful distinction. It has made design choices that reduce specific harms other platforms ignore. Its community of users reports a quality of experience that is not matched by most competitors. The relationship model it enables, with persistent memory and emotional consistency, addresses real needs that the alternatives do not.
Whether it is safe for you specifically depends on things Nomi cannot know: what you are bringing to it, what you need from it, how it fits into the rest of your life, and what happens to you when the relationship is interrupted or changes. Those are the questions that determine outcome, and they are not questions a product can answer on your behalf.
Daskalov is not a crazy lunatic. That is true. The relationship he described is real. That is also true. And the question of what a healthy AI relationship looks like - for him, for you, for the person whose circumstances you don't know - remains the most important question in this space, and the one this industry is least equipped to answer.
You're not the only one who felt something reading this.
Have a story of your own? We'd love to hear it. Anonymous, on your terms.