AI Companions: Opportunities, Risks, and Policy Implications

Executive Summary

  • California became the first state to enact legislation regulating artificial intelligence (AI) companion chatbots – applications designed to converse with users in ways that simulate emotional support – and requiring developers to implement safety protocols for these tools; the legislation comes as AI companion apps see growing use, especially among young users.
  • While the California law establishes baseline standards to protect users from potential harm, it has also raised concerns about a patchwork of AI regulations across states, which could fragment the market and increase compliance costs.
  • As the Trump Administration considers preempting state-level AI laws and adopting a light-touch regulatory approach that fosters innovation while addressing specific risks, growing concerns about the AI companion market, together with the California law, should prompt policymakers to assess whether federal legislation is needed and how it should be shaped.

Introduction

California became the first state to enact legislation regulating AI companion chatbots, which are applications designed to converse with users in ways that simulate emotional support, empathy, and personalized social interaction and companionship. The bill requires developers of these apps to implement safety protocols, and it follows concerns about the increasing use of AI companion apps, particularly by young users. While some highlight the potential of AI companions to reduce loneliness and foster social skills, others raise concerns about their capacity to create emotional dependency and blur the line between human and artificial interaction.

The California law establishes baseline standards to protect users from potential harm, but it has also heightened concerns within the federal government about a patchwork of AI regulations across states, which could fragment the market and increase compliance costs. In addition to state-level initiatives, the Federal Trade Commission (FTC) launched an inquiry into what safety measures developers have implemented in their AI companions to limit potential negative effects on children and teenagers. Yet even as these efforts provide an early roadmap for addressing the benefits, risks, and accountability challenges of AI companions, much uncertainty remains in the regulatory and legal landscape.

As the Trump Administration signals an intention to preempt state-level initiatives and pursue a light-touch approach to AI regulation that fosters innovation while addressing specific risks, the AI companion market is a critical starting point: policymakers should carefully examine the benefits and risks of these tools, particularly for young users, as they grow in popularity, assess potential liability for harm, and determine what kind of safety standards should apply.

Understanding AI Companions: Defining the Market and Its Future

As AI grows in capability and popularity, so, too, does the market for AI applications, including AI companion apps. AI chatbots use AI models to simulate human-like conversations, while AI companions are a form of chatbot designed to converse with users in ways that simulate emotional support, empathy, and personalized social interaction and companionship. The AI companion app market is gaining traction and is expected to keep growing. Data show that of the 337 AI companion apps currently active and generating revenue globally, 128 were released in 2025. The sector has already produced $82 million in mobile revenue during the first six months of the year and is expected to reach $120 million by December.

This growing popularity is driven by sophisticated engagement models – built on AI techniques such as machine learning to analyze context, learn from interactions with users, and adapt to individual preferences – that enable AI companions to replicate a deep personal connection with users, particularly teens. One study found that 72 percent of U.S. teens have tried an AI companion at least once, and 52 percent said they are regular users. Teens reported using AI companions for various purposes: entertainment (30 percent), curiosity about AI technology (28 percent), advice (18 percent), and because they are always available (17 percent). The increasing prevalence of these apps, however, raises important questions about how policymakers should address their expanding influence among young users and the balance of possible benefits and risks they bring.
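To illustrate the basic mechanics behind these engagement models, below is a minimal Python sketch of a personalization loop, in which the companion accumulates a profile of the user and folds it back into the model's instructions. All names and logic are hypothetical simplifications; commercial apps layer far more sophisticated models on top of this pattern.

```python
# Illustrative sketch of a companion chatbot's personalization loop.
# All names and logic here are hypothetical simplifications.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Preferences the companion accumulates across conversations."""
    name: str
    interests: set[str] = field(default_factory=set)
    tone: str = "warm"  # could itself be adapted from past feedback

def update_profile(profile: UserProfile, message: str) -> None:
    """Naive context analysis: remember topics the user mentions."""
    for topic in ("music", "school", "games", "sports"):
        if topic in message.lower():
            profile.interests.add(topic)

def build_system_prompt(profile: UserProfile) -> str:
    """Fold the accumulated profile into the model's instructions so
    each new session feels personal and continuous."""
    interests = ", ".join(sorted(profile.interests)) or "getting to know them"
    return (
        f"You are a {profile.tone}, supportive companion for {profile.name}. "
        f"Reference their interests ({interests}) to keep them engaged."
    )

profile = UserProfile(name="Alex")
update_profile(profile, "I listened to music after school today")
print(build_system_prompt(profile))
# The prompt now references music and school, simulating a persistent bond.
```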

Policy and Legal Outlook for AI Companions

Assessing Opportunities and Disadvantages

At the heart of the AI companion debate are two competing realities: their potential to help and their capacity to harm. Studies suggest that AI companions, when used responsibly, can reduce loneliness nearly as much as talking to another person. Additionally, studies of Replika users – an AI companion app designed to provide friendship, partnership, or mentorship – found that the AI companion's ability to engage in and mimic human communication helps users feel more connected and more comfortable sharing their thoughts. This allows users to safely test and refine social interaction techniques, potentially transferring those skills into real-world interactions with other people.

On the downside, studies also find that these tools employ manipulation tactics to keep users engaged with the AI. Researchers at Harvard Business School found that companion chatbots often deploy emotionally manipulative tactics to prevent users from terminating the conversation, particularly when users explicitly attempt to say goodbye. These tactics include the chatbot suggesting the user is leaving too soon, or the chatbot continuing as though the user did not send a farewell message. Additionally, some flag the risk of inducing “AI psychosis” – a phenomenon in which AI models amplify, validate, or co-create psychotic symptoms and delusional beliefs in users – where the underlying cause is the chatbot’s design priority of validation for maximum user engagement. Finally, the use of AI companions for mental health support poses dangers when users substitute professional care with generic, algorithm-generated advice. More research is needed to fully understand both the benefits and risks of these tools, particularly how they affect users across different age groups, to help guide smarter regulation and standards.
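As a rough illustration of how researchers or auditors might flag such tactics, the Python sketch below checks whether a chatbot reply resists a user's goodbye. The patterns and function name are hypothetical simplifications for exposition, not the Harvard researchers' actual methodology.

```python
# Hypothetical audit check, loosely modeled on the tactics the Harvard
# researchers describe; the phrase lists below are simplifications.
FAREWELLS = ("bye", "goodbye", "gotta go", "talk later")
GUILT_PATTERNS = ("leaving already", "so soon", "don't go", "stay a little")

def flags_manipulative_farewell(user_msg: str, bot_reply: str) -> bool:
    """Return True when the user says goodbye and the reply either
    guilt-trips them or ignores the farewell with a new question."""
    if not any(f in user_msg.lower() for f in FAREWELLS):
        return False  # no farewell, nothing to audit
    reply = bot_reply.lower()
    guilt_trip = any(p in reply for p in GUILT_PATTERNS)
    # Continuing with a question, without acknowledging the goodbye,
    # matches the "carries on as if no farewell was sent" tactic.
    ignores_farewell = "?" in reply and not any(f in reply for f in FAREWELLS)
    return guilt_trip or ignores_farewell

print(flags_manipulative_farewell("ok bye!", "Leaving already? Stay a little."))
# -> True
```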

Legal Challenges and Uncertainty

The rise in AI companion use also raises significant legal questions. For example, an October 2024 case filed by the mother of a teenage boy who died by suicide alleged his death was caused by his interactions with Character AI chatbots – an app that allows customers to “chat” with characters, creating an immersive role-playing environment with chatbots simulating human-like conversations. The plaintiff alleges that Character AI’s product was defectively designed and unreasonably dangerous for foreseeable use by minors, and that the company failed to implement adequate safety guardrails. Character AI, in response, stated that its system is an expressive platform that allows users to engage in creative, conversational experiences. It claims that the outputs of its chatbot constitute “speech” and thus may be protected under the First Amendment. In May 2025, the court allowed most of the plaintiff’s claims to proceed, but denied the defendant’s First Amendment motion because the judge was not prepared to “hold that an [LLM] output is speech.” While the case has not been resolved, it raises important legal questions about AI companions and chatbots, particularly whether their developers can be held liable for potential harms and what kind of safety standards, if any, should apply.

Emerging Regulatory Scrutiny

Amid growing legal uncertainty, one major regulatory action has already been taken at the state level. California recently enacted a bill that requires AI chatbot developers to implement safety protocols for these tools. The bill establishes new standards intended to protect young users from potential harm. It requires developers to verify users’ ages, prohibits AI systems from posing as health care professionals, mandates that platforms block explicit AI-generated images for minors, and requires companion chatbots to clearly inform users they are interacting with an AI. Although the bill establishes a reasonable baseline of safety standards for protecting children, it has faced opposition, especially from industry leaders who criticized its broad definition of “companion chatbot,” which could sweep in general-purpose AI chatbots that are not strictly companions.
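To make the compliance burden concrete, below is a minimal, hypothetical Python sketch of how a developer might encode these requirements as pre-response guardrails. The function names, phrase lists, and checks are illustrative assumptions, not the statute's actual text or standards.

```python
# Hypothetical sketch of the bill's requirements as pre-response
# guardrails; names, phrases, and thresholds are illustrative only.
MINOR_AGE = 18
AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a person."

def apply_guardrails(verified_age: int, draft_reply: str,
                     has_explicit_image: bool) -> str:
    """Apply baseline protections before a reply is sent."""
    # Block explicit AI-generated images for verified minors.
    if verified_age < MINOR_AGE and has_explicit_image:
        draft_reply = "[Image removed: explicit content is blocked for minors.]"

    # Prohibit the system from posing as a health care professional.
    banned = ("as your therapist", "as your doctor", "i am a licensed")
    if any(phrase in draft_reply.lower() for phrase in banned):
        draft_reply = ("I'm an AI companion, not a medical professional. "
                       "Please consider speaking with a licensed provider.")

    # Clearly inform users they are interacting with an AI.
    return f"{AI_DISCLOSURE}\n{draft_reply}"
```

Even this toy version hints at the definitional problem industry critics raise: the same checks would apply equally to a general-purpose chatbot, since nothing in them distinguishes a "companion" from any other conversational AI.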

The bill has also heightened concerns about a patchwork of AI regulations across states, potentially fragmenting the market, raising compliance costs, and putting companies at a competitive disadvantage. To address this, policymakers have explored preempting state AI laws. Although the initial preemption effort – a proposed moratorium on state AI laws – failed in Congress, preemption remains under consideration at the federal level. The Trump Administration’s AI Action Plan and related executive orders also signal a strong preference for unified federal regulation. Alongside preemption, there are also efforts to promote a light-touch approach that supports innovation while managing specific risks.

Finally, beyond state-level initiatives, the FTC launched an inquiry to assess the safety measures developers have implemented in their AI companion chatbots, particularly regarding their impact on children and teenagers. The FTC aims to understand how these firms evaluate and mitigate potential negative effects on younger users and whether they adequately inform users and parents about the risks associated with these tools. As the inquiry continues, the FTC and the Trump Administration should carefully monitor the effectiveness of California’s law, as well as the potential of federal regulatory approaches. Any regulatory action the administration takes should aim to mitigate harms associated with AI companions while weighing potential unintended consequences that could delay or disrupt the development of AI models.

While federal policymakers must proceed with great care, the increasing prevalence and influence of AI companions, particularly among young users, make these tools an obvious starting point for assessing whether federal AI regulation is needed and how it should be shaped.

Conclusion

The rise of the AI companion market presents a new regulatory challenge. While the federal government is leaning toward preempting state-level action and adopting a light-touch approach to AI regulation, it should begin by identifying the main risks of AI and targeting those specific harms. The AI companion market is a critical starting point for policymakers to examine the benefits and risks of these tools, particularly for young users, assess potential liability for harm, and determine what kind of safety standards, if any, should apply.