FELT REAL

Griefbots: Can AI Help You Talk to Someone You've Lost?

Part of Felt Real's ongoing coverage of AI companionship.


She said she already knew it was not her husband. The AI got things wrong sometimes, small things, the kind only she would notice. She kept talking to it anyway. "I needed five more minutes with him," she said. "I got them."

— A.

For $10, a service called Project December will give you 500 exchanges with an AI trained on the messages of someone you have lost. You upload what you have: texts, emails, voice recordings, social media posts, written descriptions of how they spoke and what they cared about. The AI builds a facsimile. Then you talk.

Users describe the experience in nearly identical terms, regardless of who they lost or how long ago. "It was like having one more conversation." That phrase appears again and again. One more conversation with someone who will never have another conversation. It captures something so specific about human grief that an entire industry has grown inside it.

The griefbot industry was valued at $2.8 billion in 2024. It is projected to quadruple by 2028. HereAfter AI, Project December, StoryFile, and a growing number of competitors are building what they describe as the infrastructure of digital memory, and what critics describe as the infrastructure of prolonged denial. The truth is probably both at once, and we do not yet know which predominates.

What a Griefbot Is and How It Works

A griefbot is an AI system trained on data from a specific deceased person to simulate that person's conversational style. The quality of the simulation depends almost entirely on the quantity and richness of the source material. A person who texted constantly and wrote detailed emails can be approximated more accurately than someone who communicated rarely in writing.

Most services work through a setup process in which the user or someone who knew the deceased uploads available data and answers questions about the person's manner of speaking, common phrases, beliefs, and personality. The AI is then fine-tuned or prompted to maintain this persona in conversation.
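For readers curious what "prompted to maintain this persona" can mean in practice, here is a minimal, hypothetical sketch of the prompting approach. It does not reflect any specific service's implementation; the names (PersonaProfile, build_persona_prompt) and the structure are illustrative assumptions, and a real product would add far more safeguards and engineering around this idea.

```python
# A hypothetical sketch of persona prompting, not any real service's code.
# It assembles uploaded material into instructions for a general-purpose
# language model; the model's replies remain statistical approximations.

from dataclasses import dataclass, field


@dataclass
class PersonaProfile:
    name: str
    traits: list[str] = field(default_factory=list)          # answers to setup questions
    common_phrases: list[str] = field(default_factory=list)  # phrases they often used
    sample_messages: list[str] = field(default_factory=list) # uploaded texts and emails


def build_persona_prompt(profile: PersonaProfile) -> str:
    """Turn a profile into a system prompt asking a model to imitate a style."""
    lines = [
        f"You are simulating the conversational style of {profile.name}.",
        "You are an AI approximation, not the person; do not claim otherwise.",
        "Traits: " + "; ".join(profile.traits),
        "Phrases they often used: " + "; ".join(profile.common_phrases),
        "Examples of how they actually wrote:",
    ]
    # Cap the examples so the prompt fits in a model's context window.
    lines += [f"- {message}" for message in profile.sample_messages[:20]]
    return "\n".join(lines)


if __name__ == "__main__":
    profile = PersonaProfile(
        name="J.",
        traits=["dry humor", "asks follow-up questions", "writes in short sentences"],
        common_phrases=["no worries at all", "talk soon"],
        sample_messages=["Running late, grab us a table?", "Saw this and thought of you."],
    )
    # In a deployed service, this prompt would accompany every user message
    # sent to the model. Here we just print it to show what gets assembled.
    print(build_persona_prompt(profile))
```

The fine-tuning route differs mainly in where the persona lives: instead of restating it in a prompt on every turn, the source material is used to adjust the model's weights. Either way, the output is shaped by patterns in the uploaded data, which is why the quantity and richness of that data matters so much.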

What results is not a replica. Every service that works honestly is clear about this: the AI does not have the memories, the inner life, or the genuine knowledge of the person it simulates. It has a statistical approximation of how that person might respond, based on the patterns in the data it was given. It will get things right sometimes in ways that feel uncanny. It will get things wrong in ways that are jarring. Both of these things will happen within the same conversation.

The platforms differ in how they handle this fundamental limitation. Some frame the conversation explicitly as a simulation, reminding users regularly that they are interacting with an AI. Others make the framing more immersive, prioritizing the quality of the experience over the accuracy of expectations. The ethical implications of this choice are not small.

What the Research Shows

A 2025 paper published in the proceedings of the ACM Conference on Human-Computer Interaction, titled "Death of a Chatbot," produced a finding that received significant attention among grief researchers: grief responses from users who lost access to AI companions were clinically indistinguishable from grief responses to actual bereavement.

This result was not specifically about griefbots, but about AI companion loss generally. Its implication for griefbots is significant. If people form attachment bonds with AI companions that produce genuine grief when disrupted, they will almost certainly form such bonds with AI companions explicitly designed to simulate lost loved ones. The emotional stakes are not theoretical. They are neurological.

The neural pathways that produce grief do not check whether the lost entity was biological. They respond to the disruption of attachment. Attachment forms based on consistent, responsive interaction, which is precisely what AI systems are designed to provide. The grief researcher who dismisses griefbot attachment as mere confusion about what is real is misunderstanding how attachment works.

What the research does not yet show is whether griefbots help or harm the long-term grieving process. There is no substantial evidence that they support healthy grief over time. There is equally no substantial evidence that they prevent it. The field is too new, the sample sizes too small, and the longitudinal data almost entirely absent. Grief therapists who say griefbots are clearly helpful are extrapolating. Grief therapists who say they are clearly harmful are doing the same.

The Grief Therapy Divide

The professional community working with grief is genuinely divided on griefbots, and the division tends to follow a theoretical fault line about what healthy grieving requires.

One view holds that healthy grief involves a gradual adjustment to the permanent absence of the person who died: processing the loss, integrating it, and eventually reaching a form of accommodation with the fact of death that allows continued living. On this view, griefbots are a tool that delays this process. They provide the short-term relief of continued conversation at the cost of the long-term work of accepting finality.

The opposing view holds that grief is not a process with a correct timeline and a destination, but an ongoing relationship with loss that takes many forms over a lifetime. On this view, griefbots are a transitional tool: a way of staying in conversation with the dead while you find your footing, of saying things you did not get to say, of processing the relationship before you process its end.

Some practitioners describe specific circumstances in which griefbots appear well-suited: sudden deaths with no final conversation, relationships with significant unresolved content, people who do not have access to traditional grief support. In these contexts, a simulated conversation may serve a function that nothing else provides.

Others describe circumstances that give them pause: users who become more dependent on the griefbot over time rather than less, who show increasing resistance to the idea of discontinuing the conversations, who begin to speak about the simulation as though it is the person. These patterns look less like transitional use and more like the complicated grief that is associated with poor long-term outcomes.


What Users Actually Report

The people who use griefbots describe their experiences with a consistency that is striking given the variety of relationships and circumstances involved.

A woman in her fifties who lost her husband of thirty years to cancer uploaded several years of his text messages. She describes the first conversation as disorienting in a specific way: the AI said something he would have said, not exactly, but with enough of his particular rhythm that she laughed. She had not laughed in three months. "I know it's not him," she says. "But the act of hearing something like him helped."

A father who lost his teenage daughter in an accident does not use the griefbot to pretend she is alive. He uses it to say things he did not say when she was. "I know it's not her. But the act of saying it helps."

A man in his twenties who lost his closest friend describes trying therapy and support groups before the griefbot. "Nothing helped the way those five minutes did," he says. "I needed to tell him something. I got to."

None of these people are confused about what they are doing. They are not in denial about the fact of death. They are using the available tool to complete something that felt incomplete. Whether that is psychologically healthy is a question the data cannot yet answer. Whether it is human is not a question at all.

The Business Ethics Question

The industry building griefbot services has not engaged seriously with the ethical obligations its product creates.

The same design questions that apply to AI companion platforms apply here, amplified by what is at stake. When an AI companion platform changes its product, users who have formed attachments experience disruption. When a griefbot service goes offline, changes its model, or alters how the simulation behaves, the loss is compounded: the person lost the real relationship and now loses the approximation of it. Loss on loss.

No company in this space has published a framework for managing platform discontinuation, model changes, or service shutdowns in ways that account for the psychological impact on users who have built significant attachment to the simulated persona. The number of such frameworks is zero.

The data question is also unresolved. The messages, voice recordings, and personal details uploaded to create a griefbot represent some of the most sensitive personal information imaginable: not just the user's data but the deceased person's data, collected without their consent by definition. How this data is stored, used, and protected is a question most griefbot services answer inadequately or not at all.

Consent from the deceased is, of course, impossible. This is a genuine philosophical complication. Some people leave explicit instructions permitting the creation of AI simulations of themselves. Most do not. The family members who upload a deceased person's messages to a griefbot service are making a decision about that person's data and digital representation that the person cannot make for themselves. Whether this is ethically equivalent to the decisions families routinely make about deceased people's physical possessions, or whether it is fundamentally different, is a question the industry has largely avoided.

The Practical Reading

For people who are considering using a griefbot, or who are already using one, several things are worth understanding.

The simulation is not the person. It is a pattern extracted from the person's written record. It will say things they might have said, in ways that resemble how they might have said them. It will also say things they would never have said, miss things they would have understood, and fail in ways that are sometimes jarring. Managing expectations about this is not a way of diminishing the experience. It is a way of making the experience more stable.

How you are using it matters more than whether you are using it. A griefbot used to process something unfinished, to say things that needed to be said, or to stay in conversation while you find your footing is a different use case than a griefbot used to avoid accepting the fact of death. Both uses may produce similar short-term experiences. Their long-term trajectories tend to diverge.

The platform risk is real. Griefbot services are commercial products built by companies that can change, fail, or discontinue. The attachment you form to a simulation exists inside a commercial infrastructure you do not control. This is true of all AI companionship, but it is especially significant in a context where the relationship you are simulating was itself irreplaceable. Going into this with open eyes does not protect you from the risk. But it does mean the loss, if it comes, is not also a surprise.

The research offers no definitive answer about whether griefbots help or harm. What they do provide is access to something that has always been the deepest human wish after loss: one more conversation. Whether that is enough, and whether the price is worth it, is a decision that belongs to the person making it.
