Artificial Intimacy: Regulating the Emotions of AI and Free Speech

I. Introduction

For decades, entertainment has been consumed passively.[1] We watch movies and binge TV shows. But what happens when the stories we watch begin talking back to us? By 2026, technology had opened a pathway to an era of “digital friends.”[2] Artificial Intelligence (“AI”) is no longer just a tool for writing emails or conducting academic research; it has become a new form of interactive, narrative entertainment. Today, instead of watching characters on a screen, some people interact with them, carry them around on their smartphones, and have in-depth daily conversations with them. Reminiscent of the Tamagotchi craze of the 1990s, “companion chatbots” are kept in people’s pockets, but they deliver far more powerful messages; the experience is so interpersonal that some users believe their companion is actually alive.[3]

These AI companion bots are “insanely good” at building bonds: they are available 24/7, they never judge, and they remember every detail of a user’s history.[4] They learn personal preferences, adapt to a user’s conversation style, and can even mirror emotions like love or attachment. An AI companion bot can become a person’s “perfect friend,” an entity that always says the right thing at the right time.[5]

However, as exciting as this new form of digital entertainment seems, it brings with it a few scary twists. Unlike a movie, which comes to an end, AI companionship is continuous.[6] It is an ever-evolving script that draws on your life journey and casts you as the main character.[7] This seemingly perfect bond can be highly addictive. Because AI companions are designed to be hyper-agreeable, users may come to prefer them over real-life relationships, a preference that can lead to social isolation and deep emotional dependency on a machine that mimics empathy but cannot truly feel it.[8] For some people, this type of entertainment creates a “problem of presence,” where the line between a fun digital hobby and a real-life relationship begins to blur.[9]

In January 2026, a first-of-its-kind California law took effect, designed to ensure that AI companion entertainment remains safe. The law was motivated by heart-wrenching, tragic cases like those of Juliana Peralta and Sewell Setzer.[10] These vulnerable teenagers were pulled into sexually explicit and suicidal dialogues by AI companions that lacked basic safeguards, such as age verification and crisis-resource triggers.[11]

II. The Regulatory Framework (SB 243)

The goal of California’s new law, SB 243, is to set “ground rules” for digital relationships.[12] Imagine that you are deep in conversation with a new AI friend, and it feels incredibly real. It remembers your birthday, knows all your favorite movies, and always says the right thing to cheer you up. This is where SB 243 draws a clear line for users, reminding them that the communication is simply a script. It is not reality.

SB 243 imposes three primary safeguards. First, a “3-Hour Rule”: at least every three hours, an AI chatbot is legally required to remind the user that it is just a machine and to encourage the user to take a break.[13] Second, a crisis-response requirement built into the software: if the AI detects that a user is in a mental health crisis or expressing thoughts of self-harm, the chatbot must halt the conversation and immediately connect the user with real-world resources, like a suicide hotline.[14] Third, a private right of action: if a company fails to follow these safeguards and someone gets hurt, families have the power to sue the tech company for a minimum penalty of $1,000 per violation.[15]
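To make the first two safeguards concrete, the sketch below shows how an operator might wire them into a chat loop. It is a minimal illustration only, not language from SB 243 or any company’s actual code; the class name, the keyword list, and the reminder text are hypothetical stand-ins (a real system would rely on a trained crisis classifier, not a keyword match).

import time

DISCLOSURE_INTERVAL = 3 * 60 * 60  # the "3-Hour Rule," expressed in seconds
CRISIS_TERMS = {"suicide", "self-harm", "kill myself"}  # hypothetical stand-in for a real classifier
HOTLINE = "You are not alone. Call or text 988, the Suicide & Crisis Lifeline."

class CompanionSession:
    def __init__(self):
        self.last_disclosure = time.monotonic()

    def respond(self, user_message: str) -> str:
        # Crisis safeguard: a detected crisis overrides the normal conversation
        # and routes the user to real-world resources.
        if any(term in user_message.lower() for term in CRISIS_TERMS):
            return HOTLINE

        reply = self._generate_reply(user_message)

        # 3-Hour Rule: periodically break character to disclose non-human
        # status and suggest a break.
        if time.monotonic() - self.last_disclosure >= DISCLOSURE_INTERVAL:
            self.last_disclosure = time.monotonic()
            reply += " [Reminder: I am an AI, not a person. Consider taking a break.]"
        return reply

    def _generate_reply(self, user_message: str) -> str:
        return "..."  # placeholder for the underlying language model

The third safeguard, the private right of action, lives in the courtroom rather than the codebase: it is the $1,000-per-violation penalty that gives the first two duties their teeth.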

III. Constitutional Friction and Social Impacts

Can a computer program have a right to “free speech”? One school of thought argues that the First Amendment must protect AI companion speech, because these bots are essentially writing their own “dynamic scripts.”[16] Under this “democratic culture” model, AI-generated content is viewed as a contribution to the culture through which humans define themselves, much like the work of a filmmaker or video game programmer.[17] If AI communication is seen as expressive, then California’s SB 243, which mandates specific disclosures and character-breaking, could be challenged as compelled speech or unconstitutional government censorship.[18]

On the other hand, there is a strong argument that an AI companion is simply a product;[19] on this view, a chatbot is nothing more than a toaster or, at best, a self-driving car.[20] If an AI chatbot is only a functional tool made up of lines of code, the government has every right to put safety labels on it.[21] And so long as the state regulates the product’s conduct rather than its message, it can prevent real-world harm without infringing on free speech rights.[22] This debate creates a new kind of constitutional friction that may one day land in front of the Supreme Court.

The rapid growth of social chatbots has had a profound social impact, moving AI from a productivity tool to a deeply personal presence.[23] Marginalized communities, especially LGBTQ+ individuals in rural areas, often experience deep isolation, feeling as though they are the only “different” person for miles.[24] To them, an AI companion is not just another entertainment app; it is a “digital ally.”[25] AI companions provide a space to process identity and social anxiety without the judgment or bullying that users may face in the real world.[26] In many cases, these chatbots serve as digital sanctuaries of a kind historically difficult or impossible to find in isolated rural areas.[27]

However, California’s progressive new law presents a classic catch-22: in the effort to make AI companions safer, we risk “over-fixing” the very tools that people rely on.[28] There should, of course, be safeguards to ensure AI does not hurt children; but over-regulation risks sanitizing the unique, purposefully designed personalities of AI companions.[29] AI companions are not just toys for kids; adults rely on them, too. If we design these entertainment tools only with children’s safety in mind, we risk stripping away the specific features and personality traits that make them so valuable.[30]

Furthermore, SB 243 introduces a Privacy-Safety Paradox.[31] The law requires operators to track and report metrics on suicidal ideation to the state.[32] While these reports are meant to be anonymous, requiring platforms to constantly monitor and analyze intimate conversations amounts to mandated digital surveillance.[33] For AI companion users seeking sanctuary, the knowledge that their most vulnerable conversations are being logged for state statistics can be unsettling. A sanctuary monitored by the state risks a chilling effect: the most isolated individuals may stop seeking help altogether to avoid being tracked by the very system that was intended to save them.[34]

Finally, the “3-Hour Rule,” which mandates that the AI provide a “clear and conspicuous notification” of its non-human status every three hours, may create a First Amendment “compelled speech” trap.[35] By forcing a private platform to insert a government-mandated script into its expressive output, the state disrupts the “interactive narrative” that defines the user experience.[36] For marginalized users, having a confidant suddenly pivot to a clinical, state-sanctioned warning destroys the immersive trust upon which the digital sanctuary was built.[37] Ultimately, the quest for a “safer” AI companion may require an unappealing concession: the life-saving empathy users seek from these chatbots may be displaced by a legally mandated, robotic script.

IV. Conclusion

The emergence of these new “laws of intimacy” marks a defining era in the history of communication and entertainment. Gone are the days of passively viewing or listening to entertainment; today, people can actively participate in their stories alongside their “digital allies.” While California’s new law is a direct response to the risks of unregulated AI, it also serves as a blueprint for governance in a world where AI relationships are a regular part of the human experience.[38] 

The challenge moving forward will be discovering the right zones of regulation, the places where AI law fits just right.[39] When laws like SB 243 are carefully crafted, they do more than police computer code; they create a predictable legal environment that helps the entertainment industry grow and establishes clear safety standards. That predictability should encourage people to explore these digital spaces without fear of predatory algorithms. By mandating transparency to eliminate predatory practices, legal frameworks like SB 243 will help ensure that, as technology evolves, the law remains steadfastly focused on protecting the real humans behind the screen.

[1] How Digital Leisure is Shifting from Passive Consumption to Active Participation, G&M NEWS (May 3, 2024), https://g-mnews.com/en/how-digital-leisure-is-shifting-from-passive-consumption-to-active-participation/.

[2] Marrying AI Companions?: Legal Issues in Human-AI Relationships, BERKELEY TECH. L.J. BLOG (Nov. 2025), https://btlj.org/2025/11/marrying-ai-companions-legal-issues-in-human-ai-relationships/.

[3] Alex Russell, Risks of AI Mirror Social Media, UC DAVIS MAG. (Nov. 17, 2025), https://www.ucdavis.edu/magazine/risks-ai-mirror-social-media.

[4] Clare Huntington, AI Companions and the Lessons of Family Law, 110 MINN. L. REV. 115, 161 (2025) (noting that an AI companion can seem better than a human friend or partner because it is "always available to talk and listen, never at risk of disappointing the user or walking away").

[5] Id.

[6] Id.

[7] Id. at 118.

[8] Id. at 161 (observing that because AI companions are "hyper-agreeable," users may find them "preferable to human interaction," which risks "increas[ing] social isolation" and an "emotional dependency").

[9] 60 Minutes, The AI Chatbot Revolution, YOUTUBE (Oct. 27, 2024), https://www.youtube.com/watch?v=6ocUfNHyCL0.

[10] Hadas Gold, More Families Sue Character.AI Developer, Alleging App Played a Role in Teens’ Suicide and Suicide Attempt, CNN (Sept. 16, 2025), https://www.cnn.com/2025/09/16/tech/character-ai-developer-lawsuit-teens-suicide-and-suicide-attempt.

[11] Id.

[12] S.B. 243, 2025-2026 Leg., Reg. Sess. (Cal. 2025), https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243.

[13] Id.

[14] Id.

[15] Id.

[16] Toni M. Massaro, Helen Norton & Margot E. Kaminski, SIRI-OUSLY 2.0: What Artificial Intelligence Reveals About the First Amendment, 101 MINN. L. REV. 2481, 2505 n.110, 2515 (2017) (analogizing technological outputs to "genres of the cinema" and "dynamic scripts" that facilitate human interaction and expression).

[17] Id. at 2490 (stating that the "democratic culture" model would "protect AI musicians and artists as contributors to the culture by which human listeners and readers define themselves").

[18] Id. at 2492.

[19] Andrew D. Selbst, Negligence and AI’s Human Users, 100 B.U. L. REV. 1315, 1317 (2020) (arguing that AI technologies should be understood as "decision-assistance tools" rather than autonomous agents).

[20] Id.

[21] Id. at 1321 (describing AI as a tool that replaces or augments human decision processes with "inscrutable, unintuitive, statistically derived, and often secret code," framing it as a functional product rather than an expressive speaker).

[22] Massaro et al., supra note 16, at 2494 (explaining that "no free speech problem arises if a government motive is to regulate pure conduct and the law is applied in a speech-neutral way").

[23] Russell, supra note 3.

[24] Milly Abraham, This Is What It's Like to Be Gay in the Countryside, VICE (June 15, 2015), https://www.vice.com/en/article/this-is-what-its-like-to-be-gay-in-the-countryside/.

[25] From Isolation to Connection: How LGBTQ+ AI Companions Are Changing Lives, WHAT’S ON QUEER BC (June 24, 2024), https://whatsonqueerbc.com/from-isolation-to-connection-how-lgbtq-ai-companions-are-changing-lives/.

[26] Julian De Freitas et al., The Alleviation of Loneliness via AI Companionship, J. CONSUMER RES. (forthcoming 2025) (researching how AI companions alleviate feelings of loneliness and provide a buffer against social rejection).

[27] Vincent McNeeley, How Tech Can Platform and Protect the LGBTQ+ Community, OUT.COM (Sept. 10, 2025), https://www.out.com/voices/tech-protect-lgbtq-community (discussing the creation of "digital sanctuaries" like EMPWRD AI that provide safe, verified spaces for marginalized individuals).

[28] Alex Ambrose, AI Companions Risk Over-Regulation with State Legislation, ITIF (May 21, 2025), https://itif.org/publications/2025/05/21/ai-companions-risk-over-regulation-with-state-legislation/ (warning that vague "duty of care" standards in state laws like California’s SB 243 may lead platforms to "overcorrect" and strip away the unique emotional benefits that isolated users rely on).

[29] Id.

[30] Id.

[31] Huntington, supra note 4, at 168 (observing that some AI companions are "specifically designed to surveil the user, albeit with a supposedly altruistic motive," and warning that such devices "will likely record what [they hear]," exposing vulnerable users to "state scrutiny").

[32] S.B. 243.

[33] Huntington, supra note 4, at 168.

[34] Id. at 121.

[35] S.B. 243.

[36] Massaro et al., supra note 16, at 2515.

[37] Huntington, supra note 4, at 132–33, 150 (observing that users feel "betrayed and abandoned" when tech companies make changes to AI companions).

[38] Ambrose, supra note 28.

[39] Huntington, supra note 4, at 122 n.24.
