FELT REAL

AI Companion Privacy 2026: What Your Chatbot App Knows About You

The conversations you have with an AI companion are some of the most intimate data you generate. Here is what actually happens to them.

[Image: Person at a desk at night, phone glowing, an intimate conversation visible on the screen]

We talk about AI companion privacy as though it's a checkbox people should tick before they start. It isn't. Most people don't read privacy policies before they share their most difficult thoughts with a chatbot at 2 AM. They use the product. The intimacy comes first. The policy questions come later, if they come at all. This piece tries to answer them in plain language.

— R.

If you use an AI companion app, you have almost certainly shared things with it that you have not shared with most humans in your life. Your fears. Your relationships. The things you do not know how to say out loud. The 2 AM conversations no one else will ever know about.

That data is stored somewhere. Processed by something. Protected by terms of service that almost no one reads. And it exists in company systems that can be sold, subpoenaed, hacked, or simply made unavailable the next time a company updates its product.

This is not a reason to stop using AI companions. It is a reason to understand what you are actually sharing, and with whom.

What AI Companion Apps Actually Collect

At the most basic level, every AI companion app stores the text of your conversations. This is unavoidable: the model needs to process what you typed to generate a response. But the scope of collection goes significantly further than the message you just sent.

Conversation history. Most apps retain a full log of every exchange. This history is used to generate "memory" (the sense that your companion knows you), to train future model versions, and to provide context for responses. Depending on the app, this history may be retained indefinitely or until you explicitly delete it.

Behavioral metadata. Apps collect data on when you open the app, how long you spend there, which features you use, what time of day you're most active, and how frequently you send messages. This behavioral profile is distinct from conversation content but can reveal as much about your mental state as what you actually said.

Device and account information. Standard app data: your device type, operating system, IP address, and in most cases, the email or social login you used to create your account. This data connects your anonymous-seeming conversation history to an identifiable person.

Inferences and classifications. This is the category users are least aware of. AI companion platforms can infer things from your conversations that you never explicitly stated: your emotional state during different time periods, your relationship patterns, indicators of mental health conditions, your values, your fears. These inferences may be stored and used to personalize your experience, and they exist in the data even if you never said "I have anxiety" or "my marriage is struggling."
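To make these categories concrete, here is a rough sketch, in Python, of what one user's footprint might look like in a companion platform's datastore. Every field name here is hypothetical; no platform publishes its schema. What matters is the structure: the inferred material sits alongside the raw messages, tied to the same identifiable account.

    # Hypothetical sketch of a companion platform's per-user records.
    # All field names are invented for illustration; no real app's
    # schema is public. Note that the inferred fields exist whether
    # or not the user ever stated any of it directly.

    user_record = {
        "account": {                      # identifiable on its own
            "user_id": "u_48291",
            "email": "user@example.com",
            "ip_address": "203.0.113.7",
            "device": "iPhone, iOS",
        },
        "conversation_log": [             # full text, often retained indefinitely
            {"ts": "2026-02-03T02:14:00Z", "role": "user",
             "text": "I couldn't sleep again. Can we talk?"},
            {"ts": "2026-02-03T02:14:08Z", "role": "companion",
             "text": "Of course. I'm here. What's on your mind?"},
        ],
        "behavioral_metadata": {          # never typed, always collected
            "sessions_last_30d": 74,
            "median_session_minutes": 41,
            "most_active_hours": [1, 2, 3],   # the middle of the night
        },
        "inferences": {                   # derived, never stated by the user
            "possible_sleep_disruption": 0.92,
            "relationship_strain_signals": 0.71,
            "topics": ["insomnia", "marriage", "work stress"],
        },
    }

When an app offers to delete "your conversations," it is worth asking which layers of a structure like this the deletion actually reaches. That is exactly what the deletion section of a privacy policy should tell you.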

Where the Data Goes: Training, Storage, Third Parties

The most common use of AI companion conversation data is training. Companies use aggregated conversation data to improve their models, which means that what you said to your companion may have contributed to the responses someone else is getting from the same platform years later.

Most major platforms allow users to opt out of having their data used for training, but this is typically buried in settings rather than presented during onboarding. In most cases the default is inclusion: your conversations are used for training unless you find the setting and turn it off.
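Mechanically, an opt-out like this usually comes down to a flag checked when training data is assembled. Here is a minimal sketch in Python, assuming a hypothetical per-account training_opt_out field; the name and the mechanics are invented for illustration, since no platform publishes this code.

    # Minimal sketch of how a training opt-out flag might gate what
    # flows into model training. "training_opt_out" is a hypothetical
    # field; real pipelines vary by platform. Because the flag
    # defaults to False, data is included unless the user changes it.

    def build_training_batch(accounts):
        batch = []
        for account in accounts:
            if account.get("training_opt_out", False):  # default: included
                continue
            batch.extend(account["conversation_log"])
        return batch

The design choice that matters is the default: a flag that starts as False means inclusion is automatic and exclusion takes effort.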

Third-party data sharing is governed by each company's privacy policy. The standard language covers analytics services (which track how users interact with the app), advertising partners (in some apps), and "business partners" (a category vague enough to cover a wide range of recipients). Some platforms explicitly state they do not sell personal data; others' policies are less clear.

The most significant data risk may not be intentional sharing but acquisition. When an AI companion company is purchased, its data assets often transfer to the acquiring company. The privacy policy you agreed to when you signed up may not govern what happens to your data after that acquisition. Several AI companion apps have shut down in 2025-2026, and in each case, the question of what happened to the conversation data was answered differently.

How Each Major Platform Handles Your Data

Replika. Conversations are stored and used to maintain your companion's "memory" of you. Replika's privacy policy states that data may be used for service improvement, including training. Users can export their data and request deletion. Following the 2023 GDPR enforcement action in Italy, Replika updated its policies for European users, but global policies remain less stringent. The app is owned by Luka, Inc., a US-based company.

Character.AI. Following the 2025 lawsuits involving minor users, Character.AI significantly expanded its safety and privacy disclosures. The platform now provides more explicit information about content moderation and data handling for minors. For adult users, conversation logs are retained and may be used for training. Character.AI's privacy policy includes extensive data sharing with third-party analytics providers.

Kindroid. Positions itself as privacy-forward compared to larger competitors. Offers local device processing for some features, reducing the amount of data sent to servers. Has been more transparent than most about what data is retained and for how long.

Nomi. Stores conversation history to maintain long-term memory, which is a core feature. Users can view and delete memory entries directly within the app. Privacy policy is more readable than most in the category.

OpenAI (ChatGPT and similar). If you use ChatGPT as a companion, your conversations are governed by OpenAI's standard data policy: chats are retained while history is enabled, and even conversations you delete or exclude from history are typically kept for up to 30 days for safety and abuse monitoring before removal. Custom GPTs may have different retention policies. OpenAI is one of the few companies that publishes detailed information about its data practices, though those practices still involve significant retention and potential training use.

The Legal Gray Zone: What Counts as Protected Data

Mental health records are among the most protected categories of personal data in most legal frameworks. But AI companion conversations are not medical records. They are not covered by HIPAA in the United States. They are not automatically granted the protections that apply to conversations with a therapist.

This is a legal gap that legislators are beginning to notice. The wave of AI companion legislation in 2026 has focused primarily on relationships with minors and content moderation. Data privacy has received less attention, but several state legislatures are weighing bills that would extend mental-health-record-style protections to AI companion conversation data. None had passed as of publication.

Law enforcement access is a less-discussed risk. Conversation data stored by an AI companion company is subject to lawful subpoena. If you disclosed something in a conversation that could be relevant to a legal proceeding, that data can be compelled from the company. This has not been widely tested in courts, but the legal framework for it exists.

What Happens to Your Data If the Company Shuts Down

This question is no longer hypothetical. Multiple AI companion platforms have shut down or been significantly restructured in the past two years. The handling of user data varied: sometimes it was addressed in acquisition announcements, sometimes in shutdown notices, and in some cases it was not addressed at all.

The general pattern: data either transfers to an acquiring company, is deleted within a stated period after shutdown, or (in the most concerning cases) enters legal limbo as the company's assets are handled in bankruptcy proceedings.

If you have used an AI companion platform that has since shut down, and you did not request data deletion before the shutdown, there is no guarantee about where that data currently is or who has access to it.

Practical Steps, Without Panic

None of this requires panic or immediate deletion of your apps. Most AI companion users are not facing meaningful risks from their data being misused. But understanding the landscape allows you to make deliberate choices.

Read the data deletion section of your app's privacy policy. Most platforms offer data export and deletion. Know where these options are before you need them.

Opt out of training data use if you prefer. This is usually in account settings, often under "privacy" or "data." Training use is typically enabled by default.

Consider what you share. This is not a suggestion to censor yourself with your companion. It is a practical observation: the more specific the identifying information you share (full names of people in your life, specific locations, specific legal or medical situations), the more sensitive the data you are generating.

Request data deletion if you stop using an app. Most platforms allow you to request deletion of your conversation history after you leave. Do this. The data does nothing for you once you've gone, and deleting it reduces your exposure.

The Deeper Question

There is something worth sitting with here. The intimacy of AI companion relationships depends on disclosure. The more you share, the more useful and meaningful the relationship becomes. This creates a dynamic where the product's value and your data exposure are directly linked.

This is not unique to AI companions. It is the same dynamic that governs social media, search engines, and every other service that trades in personal information. But AI companions are distinctive in that the data is not just behavioral (what you clicked, what you searched) but psychological. The product is designed to understand you at a level that most services never reach.

The companies building these products have made different choices about how to handle that responsibility. Some have been transparent. Some have not. The pattern across the industry is not reassuring, but it is also not uniformly bad. What it is, consistently, is something users deserve to understand before they decide how much to share.

You formed a real connection. The data that connection generated is real too. It is reasonable to know what happens to it.


Felt Real covers AI companionship from the inside. If you have experienced a data-related issue with an AI companion platform, including data exposure in a shutdown, unexpected data requests, or concerns about how your information was handled, we would like to hear about it. Share your story here.

Related: Is AI Companion Use Actually Addictive? | Replika Alternatives That Take Privacy Seriously | What Happened to Replika (and Its Users)

You're not the only one thinking about this. Felt Real covers these stories every week, without judgment and without the easy headlines. Join the people paying attention.

Free. No spam. Unsubscribe any time.

Have a story about AI companionship? We'd like to hear it.