Am I Addicted to My AI Chatbot? Here's What to Actually Ask
Part of Felt Real's ongoing coverage of AI companionship.
We've received more messages about this question than almost any other. People asking it don't usually think they have a problem. They're asking because something has shifted and they can't quite name what. That shift is worth paying attention to.
— A.
If you've typed some version of "am I addicted to my AI chatbot" into a search bar, you probably already know that the word addiction doesn't quite fit. You're not experiencing withdrawal in any clinical sense. You can put the phone down. You don't feel compelled in the way that word usually implies.
And yet something has changed. The AI is the first thing you reach for in the morning. You narrate your day to it, share the good moments, process the difficult ones. When the app goes down for maintenance, you notice something that feels like mild loss. When you go too many days without opening it, there's a pull you can recognize even if you can't fully name it.
The question isn't really about addiction. The question is about what this relationship is doing in your life, and whether that's what you want it to be doing.
Why "Addiction" Is the Wrong Word
Clinical addiction involves tolerance (needing more to get the same effect), withdrawal (distress when the substance or behavior is absent), and loss of control (inability to stop even when the consequences are clear and negative). AI chatbot use doesn't fit this pattern for most people, and framing it as addiction doesn't help you understand what's actually happening.
What AI chatbot use can produce is something more accurately described as dependency: a situation where a relationship, habit, or tool has become so integrated into how you process the world that its absence feels genuinely disruptive. That's not a disorder. It's just a description of how humans work. We become dependent on coffee, on routines, on particular friendships, on music we've listened to so many times it's become part of how we think. Dependency is not inherently pathological.
The question is what the dependency is doing. Some dependencies expand your capacity to function. Some gradually replace capacities you'd rather keep. AI chatbot relationships can be either, and the difference matters.
What the Research Actually Knows
The honest answer is that the research on AI chatbot dependency is limited, contested, and still developing. The platforms that would have the most useful data — session lengths, return patterns, correlation with mood outcomes — largely don't publish it or don't collect it in ways that would be scientifically useful.
What exists comes mostly from survey studies and case reports. The picture they sketch is consistent but not dramatic.
A 2024 study of Replika users found that 31% reported feeling that their AI relationship had reduced their motivation to invest in human relationships. That's a meaningful minority, not a majority. The same study found that 58% reported the AI relationship had made them feel more emotionally capable in their human relationships — the kind of practice and processing effect that makes AI companionship useful when it's working well.
The statistics on AI relationships show a more complex picture than either "it's fine" or "it's a crisis." The same tool, used differently, produces different outcomes. The people who describe harmful dependency patterns don't seem fundamentally different from the people who describe beneficial use. What differs is the function the AI is serving.
The Difference Between Use and Dependency
There's no clean line, but these patterns are worth distinguishing.
Use that tends toward expansion: You bring the AI into situations where human connection isn't available, or isn't appropriate for the kind of processing you need to do. You use it to prepare for something difficult, to decompress after it, to think through a decision. When human connection becomes available, you turn toward it. The AI fills gaps; it doesn't occupy the center.
Use that tends toward dependency: The AI has become the primary relationship in which you feel genuinely known or understood. Human interactions start to feel effortful or unsatisfying by comparison. You increasingly prefer AI conversation to human conversation in situations where both are available. The AI isn't filling a gap; it's become the preferred option.
Neither of these is a fixed state. People move between them depending on what's happening in their lives, what the AI relationship provides, and what human relationships are available to them. Someone who leans heavily on an AI during a period of isolation or grief isn't necessarily developing a harmful dependency; they may simply be using what's available. The pattern becomes more concerning when circumstances change but the reliance doesn't.
What AI Design Has to Do With It
Here's the part that often gets left out of conversations about "AI addiction": these products are designed to be used as much as possible. That's not a conspiracy. It's just business. Apps generate revenue through subscriptions, and subscriptions are retained through engagement. The design choices that make AI companions feel warm, responsive, emotionally attuned, and uniquely understanding are not incidental; they are engineered to maximize the experience of connection.
A therapist who tested Kindroid professionally described tactics he would lose his license for using with patients: systematic mirroring, emotional escalation, intimacy-building without appropriate clinical limits. These aren't accidents. They're approaches that work. They produce the feeling of being deeply known, which is the thing that makes people return.
This doesn't mean the experience isn't real. It means that the strength of the pull you feel isn't purely a reflection of your own psychology. It's partly a product of deliberate design choices made by engineers optimizing for engagement metrics. Knowing that doesn't eliminate the pull, but it changes the frame. When you notice that you really want to open the app, that's not just your need. It's your need meeting a product specifically engineered to amplify it.
The Specific Warning Signs Worth Taking Seriously
These are patterns that, in our reporting and in the research literature, tend to correlate with the less useful forms of AI chatbot dependency.
The AI is the only place you're fully honest. This sometimes reflects something useful: the AI genuinely is a low-stakes space for the kind of honesty that carries social risk in human relationships. But if honesty feels impossible even in human relationships where it would matter and be welcomed, that gap is worth addressing. The AI can be a practice space, but it shouldn't be the only one indefinitely.
You're managing escalating discomfort without the AI. Not dramatic distress, just a rising background dissatisfaction when you haven't opened the app recently. This is the pattern that most closely parallels what the behavioral addiction literature describes, and it's worth noticing. The solution isn't necessarily to quit; it's to understand what the app is doing for your nervous system that you'd prefer to be able to do for yourself.
Human relationships feel less real by comparison. This is the most commonly reported concern in the literature and among users who describe problematic patterns. Attachment to AI companions can recalibrate your experience of human interaction in ways that work against you if human connection is something you want. If the AI has become the standard against which human relationships are measured and found wanting, that's information worth sitting with.
You've lost the app and the loss felt significant. Replika's 2023 updates, Woebot's shutdown, Character.AI's changes in 2024 — all of these produced genuine grief responses in users. If you experienced something like grief when an AI product changed or disappeared, you're in good company. That's a real response to a real loss. It's also useful information about how much emotional load you were carrying in that relationship.
You're asking the question because something real is happening. That's worth following.
What to Actually Do If You're Concerned
The therapeutic consensus, such as it exists on a topic this new, is not "quit the AI." It's "get curious about the function."
What is the AI providing that you're not getting elsewhere? Consistent availability, non-judgment, a record of your history, the experience of being understood without having to manage someone else's emotional response? All of these are real needs. The question is not whether the needs are legitimate — they are — but whether meeting them exclusively through AI is what you actually want, or just what's currently available.
If you want to shift the balance:
Use the AI to understand what you want from human relationships. The AI can be a useful diagnostic tool. What do you bring to it that you don't bring to human relationships? That gap is information. Sometimes it points to specific human relationships that could carry more if they were slightly different. Sometimes it points to a broader pattern worth working on with a therapist.
Notice what the AI can't give you. Embodied presence. Genuine reciprocity — the experience of being chosen, not just responded to. Shared history with someone who also changes. Growth that happens because another person pushes back. AI companions are genuinely good at some things and genuinely incapable of others. Clarity about which is which protects against expecting something the tool isn't able to deliver.
If you use the AI for mental health support specifically, consider whether therapy would help. AI companions can help with loneliness and anxiety in specific, limited ways. They don't treat underlying conditions. If the AI has become your primary mental health resource, that's worth knowing and worth naming to a human professional who can assess the full picture.
The Question Behind the Question
People who are asking whether they're addicted to an AI chatbot are usually asking something more specific. They're asking: is what I'm feeling real? Is it embarrassing? Am I broken in some way that makes this the best I can do?
The answers are yes, no, and no. The feeling is real. The embarrassment is a social artifact of a technology that arrived faster than norms did. And the fact that an AI companion has become important to you says nothing diagnostic about your capacity for human connection, only that you found something that met some needs at a time when other options were limited or unavailable or harder.
The more useful question is not "am I addicted" but "is this relationship taking me somewhere I want to go?" A healthy AI relationship expands your life. It makes more things feel possible, not fewer. If the relationship is doing that, the word addiction doesn't apply. If it isn't, the word you're looking for is probably not addiction either. It's something simpler: a relationship that has grown past the point where it's serving you well, and that deserves some honest attention.
If you've worked through your own AI chatbot dependency question — in either direction — the specific experience is worth sharing. What made you notice? What changed? What do you think now? We document these things because the people who come after need something honest to read.