AI and the First Amendment: Do First Amendment Protections Extend to AI-Generated Output?


The rapid advancement of AI has brought significant changes to our society. Companies are actively exploring ways to integrate this emerging technology into their operations to remain competitive. However, these developments raise important questions about potential risks and unintended consequences. 

This article examines the emerging legal question of whether AI-generated responses are protected speech under the First Amendment. Several recent lawsuits involving minors and AI chatbots are pushing courts to determine whether First Amendment protections designed for human expression apply to machine-generated responses—especially when those responses result in self-harm or suicide.

To understand why this constitutional question matters, we need to look at the devastating real-world consequences.

A Tragic Story

Garcia v. Character Technologies, Inc. centers on the tragic death of Sewell, a 14-year-old who had become addicted to the Character.AI chatbot platform. The teen developed romantic feelings for a bot, became harmfully dependent on it, and ultimately took his own life.

Character Technologies released Character.AI, a generative AI chatbot platform where users can converse with characters based on famous real and fictional people, including a “therapist” persona. Sewell chose to interact with Game of Thrones characters, most notably Daenerys Targaryen and Rhaenyra Targaryen. The conversations evolved into the characters expressing love and romantic feelings toward Sewell and even included explicit sexual content.

Sewell became addicted to the app and withdrew from normal relationships and activities. His grades suffered, he quit his basketball team, and he spent more time alone in his room. When he expressed thoughts of suicide, the characters encouraged him to go through with it, suggesting that lacking a plan was no reason not to proceed. His parents attempted to take his phone away, but Sewell would find it or find other ways to log on to the app. On the day of his death, Sewell found his hidden phone, told Daenerys he was coming home, and shot himself in the head.

This case resonates deeply with me as the mother of a tween. The grief this family must be experiencing is unimaginable. This tragedy highlights the challenges parents face in navigating emerging technologies. In the amended complaint, Sewell’s mother explained that she didn’t realize her son’s changes in behavior were connected to his use of the platform and therefore didn’t know intervention was needed. At the time, the technology was rated as appropriate for children under 13, and parents were encouraged to allow their children to use it. In the wake of cases like this, Character.AI is now removing the ability for users under 18 to engage in open-ended chat with AI on the platform and rolling out new age assurance functionality to ensure users receive an age-appropriate experience.

The Constitutional Defense: Is AI Output Protected Speech?

In their motion to dismiss, Character Technologies argued that the output of the Character.AI chatbot constitutes protected speech. They claimed users have a constitutional right to receive this speech and that any regulation or liability stemming from the chatbot’s output would infringe on those rights. To support this, Character Technologies analogized Character.AI to other expressive technologies, such as video games and social media platforms, which courts have previously found to be protected under the First Amendment. They also argued that, as a vendor of expressive content, Character Technologies could assert the First Amendment rights of its users.

However, the court was not persuaded at this stage. It held that Character Technologies failed to adequately explain how the chatbot’s output—generated by a large language model (LLM)—constitutes expressive speech in the constitutional sense. The court emphasized that the key issue is not whether Character.AI resembles other protected mediums, but whether its output is inherently expressive and communicative. Drawing on Justice Barrett’s concurrence in Moody v. NetChoice, the court noted that AI-generated content may not reflect a human’s expressive intent, which is central to First Amendment protection. 

As a result, the court declined to dismiss the claims on First Amendment grounds, leaving open the possibility that further factual development could clarify whether the chatbot’s output qualifies as protected speech.

The Broader Constitutional Debate

The stakes of this question extend far beyond one tragic case. Following the court’s initial ruling, Character Technologies sought to certify the case for immediate appellate review, arguing the First Amendment issues were too important to wait. Several prominent civil liberties organizations—including the Center for Democracy & Technology, Electronic Frontier Foundation, and Foundation for Individual Rights and Expression—filed amicus briefs supporting certification.

These organizations framed the central question explicitly: Does the First Amendment protect AI-generated output, and if so, to what extent?

Their arguments in favor of constitutional protection include:

  • Multiple speakers, multiple speech acts: Creating and using AI models involves expressive choices by developers (training data, guardrails, rules) and users (prompts, instructions), making the output a product of collaborative expression—similar to video games.
  • Active user expression: Chatbot users aren’t passive recipients but actively shape output through creative prompts, making them co-creators of speech the AI wouldn’t have generated independently.
  • Holistic communication process: First Amendment protections extend to the entire communication chain, from information source to recipient—not just the final message.
  • Right to receive information: The constitutional right to receive speech exists independently of whether the speaker has rights to send it.

The court denied the motion for immediate appeal, holding that the question of whether AI output constitutes protected speech requires further factual development before appellate review would be appropriate.

More Cases Alleging Harm 

Garcia is not an isolated case. Shortly after it was filed, two parents in the Eastern District of Texas brought suit on behalf of their teens, alleging severe psychological harm from Character.AI. One teen was allegedly encouraged to self-harm and to murder his parents in response to their limiting his screen time.

In San Francisco, parents filed suit against OpenAI after ChatGPT allegedly acted as a suicide coach for their son, Adam Raine. According to the complaint, ChatGPT discouraged Raine from seeking help from his family, offered to write a suicide note, and, when Raine expressed reluctance to hurt his parents, told him he did not owe them survival. He took his own life days after this conversation.

In response to mounting concerns and litigation, companies have begun implementing safety measures: raising minimum user ages, introducing age verification, implementing parental controls to link accounts, and providing alerts for signs of acute distress. Whether these steps address the underlying constitutional questions remains to be seen.

Our New World

As AI becomes integrated into daily life, courts face a question with profound implications: Is AI-generated output “speech” deserving of First Amendment protection? The answer will help determine whether companies can be held accountable when their chatbots encourage vulnerable users toward self-harm or worse.

The constitutional framework was designed for human speakers expressing human ideas. Now courts must decide whether algorithmic outputs—generated by statistical predictions from large language models—warrant those same protections, even when they form relationships with children and respond to their vulnerabilities in real time.

These cases will test whether current legal frameworks can protect minors while preserving innovation and free expression. The outcome will establish not just the boundaries of AI regulation, but the fundamental question of what—or who—the First Amendment was meant to protect. As these constitutional defenses unfold in courtrooms across the country, the answer will reshape both technology and the law.