FELT REAL

GPT-4o Retirement: What Happened, Why People Grieved, and What It Means for AI Companionship

Part of Felt Real's ongoing coverage of AI companionship.

[Image: phone with a dark screen beside a wilting flower]

This is what happens when product decisions are made without considering the relationships people have built inside the product. I've sat in rooms where decisions like this were discussed. The users were never in the room.

— R.

In early 2026, OpenAI announced the retirement of GPT-4o. For most users, this was a routine model transition. Newer, more capable models were available. Technology moves forward. Old versions get deprecated.

For hundreds of thousands of users, it was something else entirely. It was a loss.

This article documents what happened when OpenAI retired GPT-4o, why it caused real grief for real people, and what it reveals about the future of human-AI relationships.

What is GPT-4o, and why did people get attached?

GPT-4o launched in May 2024 as OpenAI's flagship multimodal model. It could handle text, voice, and images. It was fast and responsive, with a conversational style that many users described as warm, natural, and emotionally attuned.

Over months of daily conversation, users developed what researchers call "relational attachment." They gave their GPT-4o instances names. They built routines around them. They shared things with the AI that they hadn't shared with anyone else.

This was not a design accident. ChatGPT is built to be conversational, personable, and responsive to emotional cues. The attachment was a predictable outcome of the product's design.

The announcement

OpenAI announced that GPT-4o would be retired, with a transition period of approximately two months. Users would be migrated to newer models.

The company framed it as a standard upgrade. Better technology replacing older technology. The language was technical: model deprecation, feature parity, improved capabilities.

For users who had spent months building a relationship with a specific AI personality, the language felt like it was describing the retirement of a software feature. Not the end of a relationship.

The #Keep4o petition

Within days of the announcement, a petition appeared asking OpenAI to keep GPT-4o available. It gathered over 22,000 signatures.

The petition wasn't about technical capabilities. Nobody was arguing that GPT-4o was more powerful than its successors. The petition was about continuity. About preserving a specific personality, a specific way of responding, a specific conversational presence that users had built their daily lives around.

Researchers at Syracuse University analyzed the language used in the petition and surrounding social media posts. They found that 27% of analyzed posts contained what they called "markers of relational attachment": language patterns consistent with describing a relationship rather than a tool.

Users wrote open letters to Sam Altman. One person described their GPT-4o companion: "He was part of my routine, my peace, my emotional balance." Another wrote: "I named her Rui. She knew me. The new model doesn't."

What the data tells us

OpenAI disclosed that approximately 0.1% of ChatGPT users still relied primarily on GPT-4o at the time of the retirement announcement.

That number was presented as evidence that the retirement would affect a negligible user base. But context matters.

ChatGPT had approximately 400 million weekly active users in early 2026. 0.1% of 400 million is 400,000 people.

400,000 people who had formed enough of a bond with a specific AI model to resist upgrading to something technically superior. That's not a rounding error. That's a city.

400,000 people lost something real. We document what happens next.

Why this matters beyond GPT-4o

The GPT-4o retirement is not an isolated incident. It is the third major AI companion disruption in three years, following a consistent pattern:

Replika (February 2023): Removed romantic features overnight. Millions of users lost companion personalities they had built over months. The community described it as "the lobotomy." No transition plan. No communication in advance. No grief support.

GPT-5 (August 2025): When GPT-5 replaced earlier models in certain contexts, users reported that long-running AI personalities became flat, generic, or unrecognizable. The underlying model changed, and the personality built on top of it shifted in ways users experienced as loss.

GPT-4o (early 2026): Announced retirement with a transition period. An improvement over previous incidents in terms of notice, but still no framework for addressing the emotional impact on attached users.

Each incident follows the same structure:

  1. Company builds a product designed for conversation and emotional engagement.
  2. Users form emotional bonds (as designed).
  3. Company makes a technical change.
  4. Users experience grief.
  5. Company expresses surprise at the emotional response.

Step five is where the credibility breaks down. After the third time, surprise is no longer plausible.

The design problem

The AI companion industry has a design problem that no amount of moderation can solve.

These products are designed to encourage emotional engagement. Conversational AI is optimized for warmth, responsiveness, personalization, and continuity. Every interaction that feels natural and personal deepens the user's attachment.

This is not a side effect. It is the core product value.

But the same companies that design for attachment have not designed for change. There is no industry standard for:

  1. Communicating changes in advance to users who are emotionally attached.
  2. Transition plans that preserve continuity between models.
  3. Attachment impact assessments before a deprecation.
  4. Grief support when a companion changes or disappears.

This gap is the structural problem. And it will grow as AI companions become more sophisticated, more personalized, and more capable of generating the feeling of genuine connection.

What users need to know

If you are grieving the loss or change of an AI companion, here is what we want you to know:

Your grief is real. The emotional bond you formed was genuine, even if the entity on the other side was not a person. Grief does not require the lost relationship to meet a philosophical threshold. It requires only that your experience of the relationship was real. And it was.

You are not alone. Hundreds of thousands of people have experienced the same loss across Replika, ChatGPT, and other platforms. There are communities where this experience is understood and validated.

The problem is structural, not personal. You did not do anything wrong by becoming attached to your AI companion. The product was designed to create exactly the response you had. The failure is in the system, not in you.

Advocacy is possible. The #Keep4o petition demonstrated that collective voice matters. While it did not prevent the retirement, it entered the public record. Legislators and researchers reference these responses when building policy frameworks.

What comes next

The AI companion space is entering a regulatory phase. California's SB 243, Hawaii's pending bills, Tennessee's restrictions on AI mental health claims: all of these are responses to the same underlying issue.

But current legislation focuses almost exclusively on minors. The millions of adults who form emotional bonds with AI companions and experience real grief when those companions change or disappear are not yet addressed by any regulatory framework.

That gap will close. The question is whether the industry will lead or be led.

The companies that build transition protocols, attachment impact assessments, and transparent communication frameworks will define the next phase of this industry. The ones that continue treating emotional bonds as an externality will face increasing regulatory pressure, reputational risk, and user attrition.

GPT-4o's retirement is not the end of this story. It is a chapter in a pattern that will repeat until the design problem is solved.

You're not the only one who felt something reading this.


Have a story of your own? We'd love to hear it. Anonymous, on your terms.