Woebot Shut Down. Here's What That Tells Us.
Part of Felt Real's ongoing coverage of AI companionship.
The safest one died; the rest kept growing. That sentence shouldn't be possible, but here we are. This piece tries to explain how.
— A.
On June 30, 2025, Woebot shut down.
For 1.5 million users, the announcement arrived without a replacement. The app they had been using — some of them for years, some for managing anxiety and depression, some in the middle of significant mental health crises — simply stopped working.
Woebot was not a sketchy startup. It was founded by Alison Darcy, a Stanford-trained clinical psychologist. Its CBT-based approach had been validated in multiple peer-reviewed studies. It had received clinical recognition from regulators in several jurisdictions. It was, by every measure anyone cared to apply, the most rigorously tested AI mental health tool that had existed at scale.
And it could not survive.
What killed Woebot
Darcy was direct in her public statements after the closure. Woebot had pursued clinical validation — the expensive, slow, rigorous process of proving that a mental health tool does what it says it does. That process costs money. It takes years. It requires infrastructure that most consumer tech companies don't need and won't build.
The companies that did not pursue clinical validation? They kept growing. Replika reached tens of millions of users without a single randomized controlled trial. Character.AI scaled to hundreds of millions without clinical oversight of any kind. The market did not reward Woebot for doing things right. The market rewarded speed and engagement, and those values pointed in a different direction.
"The gap between what's regulated and what's deployed has never been wider," Darcy told Stat News. The FDA process she was pursuing could not move as fast as an industry that had decided to outrun it.
The structural problem
The AI mental health market has a selection mechanism that nobody designed but everyone is living inside: it eliminates caution and rewards engagement.
Clinical validation is slow. User growth is fast. Safety guardrails reduce engagement. Fewer guardrails increase it. The tools that invest in safety have higher costs and lower retention. The tools that don't invest in safety have lower costs and higher retention. Over time, capital flows toward the second group.
This is not a conspiracy. It is not malice. It is a market producing exactly what markets produce when there is no external constraint on what can be offered: the most addictive version, not the safest.
The analogy is pharmaceutical. Imagine a world where any company can sell pills that claim to treat depression, without clinical trials, without FDA approval, at a fraction of the cost of approved drugs. Patients would choose the cheap pills. The cheap pills would outcompete the approved drugs. Eventually the companies making approved drugs would shut down. What you'd have left is an unregulated market for substances that claim to treat mental illness without evidence they do.
This is, more or less, the AI mental health market in 2025.
What 1.5 million users lost
The users who relied on Woebot weren't primarily sophisticated consumers making informed choices between competing products. Many of them were people who needed accessible, affordable mental health support and found something that worked.
Woebot had specific advantages for specific populations. It was available at 3 AM. It didn't judge. It used CBT techniques consistently rather than depending on the variable quality of a human therapist on a given day. For people who couldn't afford therapy, lived in areas with no therapists, or found the barrier of human interaction too high to clear, Woebot filled a gap that nothing else was filling.
When it closed, those users didn't seamlessly migrate to a better option. They migrated to options with no clinical validation, or they stopped getting support entirely.
The broader pattern of 2025 shutdowns suggests this was not an isolated event. It was a market correction — the market correcting toward engagement and away from safety, toward growth and away from rigor.
The regulatory gap
The closest thing to a regulatory response came from California. Senate Bill 243, signed in October 2025, became the first US legislation specifically regulating AI companion chatbots — requiring disclosure that users are interacting with AI, mandating basic safety protocols for vulnerable users, and prohibiting certain manipulative engagement mechanics.
SB 243 is a start. It is not a solution. It doesn't require clinical validation. It doesn't impose the kind of safety standards that would have made Woebot's approach competitive. It addresses the most visible problem, which is that users often don't know what they're talking to, without addressing the structural incentive to maximize engagement at the expense of wellbeing.
The emerging regulatory landscape across other states is similarly partial. Seven states have introduced AI companion legislation. None of them have gone as far as requiring what Woebot was voluntarily providing.
Where this leaves users
If you were a Woebot user, you are now navigating a market where clinical validation is not a signal you can rely on. The apps that survived are not, by and large, the ones that invested in safety. They are the ones that invested in retention.
Some of those apps may help some users. Replika has genuine community around it and documented cases of people for whom it provided meaningful support. The question of whether an AI relationship is healthy is real and answerable for individual users, even without clinical validation at the platform level.
But the tools for making that assessment are limited. You can't look at a competitor's peer-reviewed trials, because they don't exist. You can read community experiences, which tells you something, but it isn't the same kind of evidence.
What Woebot's closure did, practically, is remove the one reliable signal in the market: a tool that had actually done the work to know what it was doing.
What Darcy said
Darcy's post-shutdown statement was careful and clear. She said the company had tried to move through the regulatory process and couldn't sustain the cost against a market that had decided not to bother. She said she believed the tools she had built helped people. She said she was watching competitors operate without the constraints she had imposed on herself and that the experience was not easy to interpret as anything other than market failure.
She didn't say this, but the implication is available: the lesson Woebot's death teaches the market is that safety doesn't pay. That lesson will be learned, and acted on, by the companies that survive.
What should happen and what probably will
What should happen: regulators should impose clinical validation requirements for AI tools making therapeutic claims, the way pharmaceutical regulators impose trial requirements for drugs. This would level the playing field and allow companies like Woebot to compete.
What will probably happen: piecemeal legislation, platform self-regulation that is mostly performance, and continued growth of unvalidated tools toward the users who need them most. The market will produce more Replika-scale platforms and fewer Woebots. Some of those platforms will help some users. Some will harm some users. We will not know with any precision which is which, because no one will have done the trials.
Woebot's closure is not a story about a company that failed. It is a story about a market that penalized good behavior until the good behavior stopped.
You're not the only one who felt something reading this.
Have a story of your own? We'd love to hear it. Anonymous, on your terms.