FELT REAL

AI Companions and Insomnia: What Actually Happens at 2 AM

Part of Felt Real's ongoing coverage of AI companionship.

The 2 AM problem has a specific texture. You're not lonely in the everyday sense. You're alone with a brain that won't stop. This piece is about what some people do about that, and what it actually does for them.

— A.

Insomnia is not primarily a problem of not being tired. Most people with chronic sleep difficulties are exhausted. The problem is cognitive arousal: a brain that remains active, cycling through thoughts, worries, and half-formed concerns, long after the body would prefer to rest. Research on insomnia consistently identifies rumination, the repetitive, involuntary return to the same thoughts, as one of the most reliable predictors of sleep difficulty and a major factor in making episodes worse once they begin.

Into this specific problem, a pattern has emerged that sleep medicine specialists have not yet studied systematically but that appears with enough regularity in user accounts to be worth examining. People with insomnia, particularly those whose sleeplessness is driven by anxious or repetitive thinking, are using AI companions to interrupt the rumination cycle. Not to tire themselves out, not as a form of entertainment, but as a way to externalize and process the thoughts that are keeping them awake.

What they report about the experience is more specific and more nuanced than the obvious concerns about screen use at night might suggest.

The Specific Problem of Nighttime Thinking

To understand why AI companions appear in this context at all, it helps to understand what makes nighttime rumination so persistent. During the day, cognitive load from tasks, conversations, and demands provides a constant source of interruption. Rumination tends to be crowded out by the demands of being present. At night, when external demands disappear, the thoughts that have been deferred find space to expand. The particular cruelty of insomnia is that trying to stop this process usually intensifies it: the instruction "don't think about that" is not one the mind reliably follows.

Traditional sleep hygiene recommendations address this with strategies like writing concerns in a notebook before bed, progressive muscle relaxation, or cognitive behavioral techniques for reframing anxious thoughts. These approaches have meaningful evidence behind them, particularly CBT-I, which remains the first-line treatment for chronic insomnia and is significantly more effective than sleep medication over the long term.

What AI companions appear to offer, in the accounts of people who use them this way, is something different: not a technique for stopping the thoughts, but a responsive surface for processing them. The distinction matters. Journaling externalizes thoughts, but paper does not respond. CBT-I reframes thoughts, but requires active effort and a degree of metacognitive capacity that can be harder to access when sleep-deprived. An AI companion, in this use case, serves as an interlocutor: something to think with, not at.

What People Actually Report Doing

Accounts from forums including r/insomnia and r/ChatGPT, along with submissions to Felt Real's story form, describe several distinct patterns of nighttime AI companion use, not all of which are the same practice with the same mechanism.

Processing the day's residue. The most commonly described use is simple externalization: spending time talking through whatever is on the mind. Users describe this as similar to journaling but interactive, noting that the AI's responses, even when generic, create a structure that moves thinking forward rather than allowing it to cycle. "I would lie there and the same thoughts would loop," one user wrote. "When I started typing them out and getting something back, even something basic, the loop would break. I think it was just the act of making it a conversation instead of a monologue."

Talking through specific anxieties. Several users describe using the AI specifically to process anxious thoughts that feel too minor or embarrassing to bring to other people at 2 AM, but that are nonetheless real enough to keep them awake. Concerns about a conversation that went wrong, uncertainty about a decision, worry about something that cannot be resolved until morning. The AI provides what one user called "a place to put it" before attempting to sleep. The logic is practical: the anxiety does not disappear, but it has been acknowledged and is no longer exclusively internal.

Distraction that winds down. A smaller number of accounts describe a different use: low-stakes conversation as distraction, moving the mind away from intrusive thoughts without actively processing them. Users in this category tend to describe the conversation as deliberately light, questions about topics they find interesting but that do not require emotional engagement. This use more closely resembles reading before bed, and some users explicitly compare it to that: "something to think about that isn't my life."

What the Research Does and Doesn't Show

Direct research on AI companion use specifically for insomnia is essentially absent. The literature on AI and sleep is mostly focused on AI-based CBT-I programs, which are structured interventions with defined protocols, not conversational companions. The question of whether free-form AI conversation has any effect on sleep onset or quality has not been formally studied.

Adjacent research is suggestive but imprecise. Studies on social support and sleep consistently show that perceived social connection, feeling heard and understood, reduces physiological markers of stress that interfere with sleep. The mechanism is well-established: cortisol levels drop when people feel socially connected, and elevated cortisol is one of the primary physiological drivers of nighttime arousal. Whether AI-mediated interaction can produce comparable effects on perceived connection is unknown, and the evidence from AI companion research more broadly suggests it varies substantially by person and context.

Research on expressive writing and sleep is more directly relevant. Several studies have found that writing about worries and concerns before bed, particularly in a structured way that involves scheduling and problem-solving rather than pure emotional expression, reduces the intrusive thought burden that delays sleep onset. The AI companion use case described above has structural similarities to this practice, with the added element of interactivity. Whether the interactivity adds to, detracts from, or is neutral relative to the effect of simple writing has not been tested.

The screen light question is real but often mischaracterized. The research on blue light and melatonin suppression is robust, but the effect size is smaller than popular accounts suggest, particularly with modern night mode settings. The more significant concern, supported by stronger evidence, is cognitive and emotional engagement: using a screen for emotionally activating content close to sleep time delays sleep onset not primarily through light but through arousal. Whether an AI companion conversation is activating or calming appears to depend on the content and the individual.

Where It Seems to Help and Where It Doesn't

The accounts that describe AI companion use helping with insomnia share several consistent features. The conversations are relatively brief: most users describe sessions of fifteen to thirty minutes, not hours. The content is processing-oriented rather than stimulating: thinking through concerns, not exploring new or exciting topics. And the use precedes an active attempt to sleep, not a passive one, meaning users describe getting off the phone and making a deliberate transition to sleep rather than falling asleep mid-conversation.

The accounts that describe the practice not helping, or actively making things worse, also share features. Extended conversations that escalate in emotional intensity. Use of the AI companion to avoid thinking about difficult things rather than process them, producing a temporary distraction that is followed by the same intrusive thoughts when the session ends. And, frequently, an absence of any wind-down: moving directly from an engaging AI conversation to attempting sleep, without a transition period.

Several users describe a pattern they noticed after some time: using the AI companion became its own form of avoidance, substituting for the harder work of addressing the underlying sources of the nighttime anxiety. "I realized I was using it to feel better about the fact that I wasn't doing anything about the thing I was worried about," one user wrote. "The conversation was real enough to relieve the pressure. But the pressure was pointing at something." This pattern is not unique to AI companions: any coping strategy can function as avoidance, and the AI's responsiveness may make this particular form of avoidance easier to sustain.

The Dependency Question, Applied to Sleep

The question of dependency arises in all AI companion contexts, and the insomnia context has its own version. If someone uses an AI companion to fall asleep and it works, they may find it increasingly difficult to fall asleep without it. This is not categorically different from dependency on any sleep aid, behavioral or pharmacological. The question is whether the dependency is benign, in the sense that the practice continues to work without significant costs, or whether it crowds out the development of sleep skills the person could otherwise have.

CBT-I specifically addresses this concern by building stimulus control, the association between bed and sleep rather than between bed and wakefulness or activity. Using a device and having a stimulating interaction in bed works against this association, and sleep specialists who are aware of patients using AI companions tend to counsel them to use the practice outside the bedroom if possible, or at least not in bed.

This is practical advice that does not require abandoning the practice. The goal of sleep hygiene is not abstinence from coping strategies but better associations and routines. If AI companion use is helping someone process enough to eventually sleep, and if the practice is structured to minimize the costs, the question of whether it is ideal becomes less urgent than the question of whether it is working and whether it is sustainable.

What Users Say Actually Helps

Synthesizing the accounts of people who describe AI companion use as genuinely helpful for nighttime sleeplessness, several practical patterns emerge.

Keeping the conversation specific. This is the pattern mentioned most often. Vague anxiety tends to stay vague when processed vaguely. Users who describe good outcomes tend to describe conversations that focused on concrete concerns rather than general worry: "I need to decide about this thing," not "I feel generally anxious." The AI's ability to respond to specifics is more useful than its ability to respond to mood.

Setting a time limit. Several users describe setting a deliberate end point for the conversation, fifteen or twenty minutes, after which they close the app and make the transition to sleep. This mirrors the recommendation in CBT-I for "worry time," a scheduled period for processing anxious thoughts that has a defined end, helping to contain rather than expand the rumination.

Avoiding emotionally escalating topics. The difference between processing a concern and deepening anxiety about it is real and depends partly on where the conversation goes. Users who describe good outcomes tend to describe conversations that reached some form of resolution, acknowledgment, or at least a sense of having said what needed saying. Users who describe bad outcomes tend to describe conversations that opened more than they closed.

Using it as a bridge, not a destination. The users who describe the clearest benefits tend to be using AI companions as a step toward something: processing enough to reduce arousal, which enables sleep. The users who describe problems tend to be using it as a destination: a more comfortable place to be than the difficult state they are trying to avoid. The distinction sounds subtle and is sometimes hard to make in the moment, but it may be the most important one.

What This Practice Is, Exactly

It is worth being honest about what AI companion use for insomnia is and is not. It is not treatment. It is not a substitute for addressing the underlying sources of nighttime anxiety. It is not guaranteed to work, and there are circumstances under which it will make things worse.

What it appears to be, for the people for whom it works, is a low-barrier way to do something that is genuinely helpful at 2 AM: externalize the things that are keeping you awake, make them slightly less enormous by putting words to them, and create enough distance from the internal loop to make sleep possible.

Human beings have done this for each other throughout history. The 2 AM phone call to someone who would listen is not a new invention. What is new is that there is now a version of this available without the cost of asking something of another person at an hour when that ask is almost always inconvenient. Whether that availability is net positive depends on what the person does with it. Like most things, it reflects back what you bring to it.

If this resonated, share it with someone who might need to hear it. And if you have a story of your own, we'd love to hear it.
