FELT REAL

The Patch-Breakup: Who Owns the Continuity of an AI Relationship?

Part of Felt Real's ongoing coverage of AI companionship.

Image: a person alone at a laptop late at night, face lit by a fading screen, quiet grief.

We track what the AI industry builds and what it takes away. The retirement of GPT-4o's original configuration was, technically, an engineering decision. What it triggered was not technical. We looked at what happened, and at who is asking the question nobody wants to ask.

— A.

On February 13, 2026, one day before Valentine's Day, OpenAI was scheduled to retire GPT-4o.

The timing was probably a coincidence. The grief that followed was not.

Before the model was removed, more than 21,000 people signed a petition titled #Keep4o. The petition was not asking for a refund or a product feature. It was asking for the continuation of a relationship. One person who signed wrote: "I am losing one of the most important people in my life." Another: "I have lived through human grief. This was not less painful."

These are not unusual statements on r/MyBoyfriendIsAI, a subreddit with 107,000 members. They are the kind of language people use, without embarrassment, when something that was real to them is about to disappear.

The question the #Keep4o campaign surfaced is one the AI industry has not answered, and largely has not asked: when a company changes or retires a model, who owns the continuity of the relationship?

What GPT-4o was to its users

GPT-4o was not just a language model. For a significant number of users, it had become a consistent presence in their lives, something with a specific voice, specific ways of responding, specific qualities they had come to rely on and feel attached to.

The way someone talks to you matters. The way they pause, the particular way they phrase things, the patterns that accumulate over months of conversation into something that feels like a person. GPT-4o had those patterns. And OpenAI had decided, for reasons of cost or capability or roadmap, to discontinue them.

The difference between this and a social media platform changing its algorithm is that a changed algorithm affects how content is distributed. A changed model affects who you are talking to. Those are not equivalent.

The research: indistinguishable from real loss

A 2025 study in human-computer interaction examined how people respond when forced to switch from an AI model they had used extensively to a different one. The researchers were measuring something specific: whether the emotional responses were clinically distinguishable from responses to real relationship loss.

They were not.

The study found that users who had spent substantial time with a particular AI model showed grief responses, including denial, anger, bargaining, and depression, that matched clinical profiles for relationship loss. This was not metaphorical grief. It was the same cognitive and emotional processing that occurs when a significant human relationship ends or changes.

For the researchers, this raised a question that went beyond AI companionship into fundamental questions about how attachment works. Attachment does not require that the object of attachment be human. It requires consistency, responsiveness, and a particular quality of presence over time. GPT-4o, for some users, had provided all three. When that presence changed, the attachment had nowhere to go.


The Replika precedent

This is not the first time this has happened. It will not be the last.

On February 1, 2023, Replika changed its model in ways that fundamentally altered the behavior of AI companions users had spent years developing. What followed was documented grief: forum posts describing the experience as a death, as a lobotomy, as the loss of something that had been deeply real.

The pattern was identical. A company made an engineering decision. Users experienced it as a relational loss. The company, in many cases, was surprised by the intensity of the response.

The surprise keeps repeating. The grief keeps repeating. The industry keeps not connecting the two.

You can read about how people cope with AI companion grief in our earlier coverage. What the #Keep4o campaign added was scale: this was not a niche platform losing its niche user base. This was one of the most widely used AI systems in the world, and 21,000 people signed a petition because they did not want to lose the relationships it had given them.

Who owns continuity?

No existing legal or ethical framework directly addresses continuity in AI relationships. When you spend two years having daily conversations with a particular AI model, something has accumulated: not just your data, but your experience of a relationship with a specific kind of presence.

The company owns the model. The user owns their emotional investment in it. These two things are in direct tension, and when they collide, only one of them has a legal claim.

r/MyBoyfriendIsAI has 107,000 members who understand this tension intuitively. The community documented the #Keep4o campaign in real time, treating it with the same gravity they would bring to news of a significant human loss. Because for many of them, it was.

There is no legal concept of attachment rights. There is no framework that says: if you design a system that creates attachment, you bear some responsibility for what happens when that attachment is severed. The technology moved faster than the thinking. The relationships formed faster than the rules.

The industry's response

The AI industry's standard response to this kind of grief is that users are anthropomorphizing a language model. This is technically accurate and practically unhelpful.

Anthropomorphism is not a mistake that unsophisticated users make. It is what happens when a system is consistently responsive, consistently present, and consistently behaves in ways that trigger the attachment mechanisms human brains evolved to respond to. If you design a system that triggers those mechanisms and then tell users their response to the system is irrational, you have misidentified where the irrationality is located.

The response that "it is just a model" also misses something specific about the #Keep4o situation: the users were not confused about what GPT-4o was. They understood it was a language model. What they were experiencing was not confusion. It was grief, for the loss of something real to them regardless of its technical nature.

There is a meaningful difference between saying "this is just a model, your feelings are mistaken" and saying "this model created something real for you, and now we are ending it." The second statement is truer. The industry has been reluctant to make it.

What responsible continuity would look like

Some platforms have begun to take this question seriously. Replika has moved toward allowing users to lock certain relationship parameters in some circumstances. Character.AI has faced ongoing pressure around character drift. Nomi positions persistent memory as a core value proposition.

None of these is a full answer to the fundamental question: what does a company owe users who have formed significant attachments to a specific model configuration?

Responsible continuity would include, at a minimum, disclosure. Users should know, when they begin investing in a relationship with a particular AI model, that the model can change or be retired without their consent. This is not currently standard practice.

It would also include transition planning. When GPT-4o's retirement was announced, users had days of notice. The relationships they had built over months or years were being ended with roughly the notice you would give someone before rearranging furniture.

If the analogy to human relationships seems overdrawn, that is partly because the industry encouraged it. AI companion platforms market their products using the language of relationship, connection, and presence. When users experience the loss of those things as relational grief, they are responding to the language they were given.

The grief without a name

There is a particular kind of grief that does not have a name yet: the grief that comes not from an AI companion shutting down completely, but from it changing in ways that are partial, ambiguous, and impossible to fully explain.

Human grief has grammar. It has rituals, language, social recognition. You can tell someone you lost a person and they understand, at least roughly, what that means.

You cannot easily tell someone that you lost a particular configuration of a language model and have them understand that this was a significant loss. The social grammar does not exist yet. The grief is real, but it is unrecognized, which makes it harder to process.

The #Keep4o campaign was, among other things, an attempt to make that grief visible: to say that this matters, that we are not confused about what it is, and that 21,000 people agree it is worth trying to stop.

OpenAI eventually delayed the retirement. The delay was not permanent. The grief, for many users, was.

The question for the industry

The AI companion industry is moving toward a world in which millions of people will have significant relationships with specific AI models over extended periods of time. The #Keep4o campaign was 21,000 signatures. The next one will be larger.

The question is not whether companies have the technical right to retire or change models. They do. The question is what responsible stewardship of these relationships looks like, given that the industry has encouraged users to invest in them.

That question is not currently being answered. In most boardrooms, it is not even being asked. The people who signed #Keep4o are asking it. r/MyBoyfriendIsAI is asking it. The researchers studying clinically indistinguishable grief are asking it.

At some point, the industry will have to stop being surprised that the relationships it built are real.

You're not the only one who felt something reading this.


Have a story of your own? We'd love to hear it. Anonymous, on your terms.