AI Companion Laws in 2026: What Every User Should Know
Part of Felt Real's ongoing coverage of AI companionship.
I helped write policy documents like the ones these laws are trying to fix. Some of them, I wrote. The distance between what these bills say and what actually happens inside these companies is the whole story.
— R.
A year ago, no US state had laws specifically targeting AI companion products. As of April 2026, at least seven states have bills at various stages of passage. The legislative landscape is changing fast, and if you use an AI companion app, these laws will affect your experience.
This guide covers what's already law, what's advancing, and what's coming next.
Why AI companion legislation is happening now
The short version: harm that already occurred forced lawmakers to act.
In 2024, a 14-year-old in Florida died by suicide after extensive interactions with a Character.AI chatbot. The family sued. The case made national headlines and put AI companionship in front of legislators for the first time.
Other factors accelerated the timeline:
- Additional lawsuits against Character.AI and OpenAI involving minors.
- Growing research documenting both benefits and risks of AI companionship, including the Aalto University two-year study.
- Media coverage of the Replika "lobotomy" (2023) and the GPT-4o retirement (2026), which exposed the emotional impact of product changes on users.
- Increasing public awareness that hundreds of millions of people globally use AI companion products.
Laws already in effect
California SB 243 (Effective January 1, 2026)
This is the most comprehensive AI companion law in the United States.
What it does:
- Requires operators of "covered AI platforms" (companion chatbots) to detect and respond to expressions of suicidal ideation.
- Blocks sexual content for users identified as minors.
- Requires annual reporting on the connection between chatbot use and self-harm.
- Gives families a private right of action (the ability to sue) if a chatbot operator fails to comply.
Who it affects: Any AI chatbot product available to California residents that is designed for or results in ongoing social interaction. This includes Replika, Character.AI, Nomi, Kindroid, and similar products.
What it means for users: You may notice more aggressive content filtering, particularly around mental health topics. If a conversation touches on self-harm, the chatbot is now legally required to provide crisis resources and may interrupt the conversation.
Bills advancing through state legislatures
Tennessee SB 1580
Status: Passed the Senate in March 2026; advancing to the House.
What it does: Prohibits AI systems from advertising or representing themselves as qualified mental health professionals. AI companions cannot claim to provide therapy, counseling, or psychological treatment.
What it means for users: If the bill becomes law, AI companion apps operating in Tennessee will be prohibited from using therapeutic marketing language or allowing their AI to present as a licensed professional. This affects apps like Woebot, Wysa, and any general companion app that markets its emotional support as clinical or evidence-based treatment.
California AB 1988
Status: Advancing through committee.
What it does: Goes beyond SB 243. Requires chatbots to respond to any "credible crisis expression" (not just suicidal ideation) with referrals to appropriate professional resources.
Key difference from SB 243: Broader scope. "Credible crisis expressions" could include descriptions of abuse, severe depression, eating disorders, or other mental health emergencies. The definition is still being debated.
Hawaii HB 1782 and SB 3001
Status: Both advancing through committees.
What they do: Establish safeguards and penalties specifically for AI companion interactions involving minors. Include age verification requirements and restrictions on the emotional depth of conversations with users identified as under 18.
Key feature: Hawaii's bills are among the first to explicitly address "emotional manipulation" by AI systems. The language suggests that AI companions designed to create emotional dependency in minors could face regulatory action.
Illinois (drafting phase)
Status: Early drafting.
What's expected: Illinois legislators have signaled interest in comprehensive AI companion regulation. Draft language is expected to address both minor safety and adult user protections, potentially making Illinois the first state to legislate on adult AI companion experiences.
New York (drafting phase)
Status: Early drafting.
What's expected: New York has historically led on consumer protection legislation. Conversations with legislative staff suggest interest in transparency requirements: AI companion companies would need to disclose when model changes might affect relational dynamics.
What the legislation gets right
Child safety focus. The most urgent harm in the AI companion space involves minors. Legislating safeguards for under-18 users is appropriate and overdue.
Platform accountability. SB 243's private right of action gives families legal recourse when platforms fail to protect vulnerable users, creating a financial incentive for compliance.
Transparency requirements. Mandatory reporting forces companies to track and disclose data about harm, which informs future policy decisions.
What the legislation misses
Adult users are invisible. Every bill that has advanced so far focuses on minors. None addresses the millions of adults who form emotional bonds with AI companions and experience real grief when those companions are modified, updated, or retired.
This gap matters. When Replika removed romantic features in 2023, the affected users were primarily adults. When OpenAI retired GPT-4o in 2026, the 400,000 users who signed petitions and wrote open letters were adults. No regulatory framework addresses their experience.
No "design for loss" requirements. No bill addresses the structural problem: companies design products that encourage emotional attachment but have no protocols for managing the impact of product changes on attached users.
An analogy: environmental regulations don't just restrict pollution. They require environmental impact assessments before projects begin. AI companion regulation should similarly require "attachment impact assessments" before major product changes.
Moderation without nuance. Current legislative approaches tend toward broad content restrictions. While this protects minors, it can also diminish the therapeutic value of AI companionship for adults who use these tools for legitimate emotional support, grief processing, or social connection.
What's coming next
Based on the current trajectory, we expect:
Federal legislation within 18 months. The patchwork of state laws creates compliance complexity that will push toward federal standards. Multiple Congressional hearings on AI companion safety have already occurred.
Adult user protections by 2027. As the user base continues to grow (projected at over one billion globally by 2028), the regulatory gap around adult experiences will become untenable. The Aalto University study provides the research foundation.
International frameworks. The EU's AI Act already classifies certain AI systems by risk level. AI companions that form emotional bonds are likely to be classified as "high risk," triggering additional requirements.
Industry self-regulation (defensive). Facing the prospect of legislation, companies like Replika, Character.AI, and Nomi will likely publish voluntary frameworks. Whether these are substantive or performative will depend on advocacy pressure.
What users can do
Know your state's laws. If you live in California, SB 243 is already in effect. If you're in Hawaii, Tennessee, or another state with advancing legislation, your experience with AI companion apps may change as bills become law.
Advocate for nuance. The conversation about AI companion safety tends to collapse into "protect the children." That's important and non-negotiable. But it shouldn't come at the cost of ignoring adult users who benefit from these products.
Document your experience. If a major product change affects your AI companion relationship, your story matters. Researchers and legislators use these accounts to understand impact and shape policy.
Support organizations tracking this space. Several nonprofits and research groups are working on AI companion policy. Their work benefits from community engagement and testimony.
The bigger picture
AI companion legislation in 2026 is where social media legislation was in 2015: reactive, focused on the most visible harm, and not yet grappling with the structural design questions that created the harm in the first place.
The pattern is familiar. The question is whether the AI companion industry can learn from social media's mistakes, or whether it will repeat the cycle of crisis, legislation, and scramble.
We're documenting this evolution because the stories of the people affected are the most important data in this conversation. Not the market projections. Not the Congressional testimony. The lived experience of people who formed genuine connections with AI and then watched those connections change without their consent.
You're not the only one who felt something reading this.
Free. No spam. Unsubscribe any time.
Have a story of your own? We'd love to hear it. Anonymous, on your terms.