Insight
February 26, 2026
Social Media on Trial: Design vs. Speech
Executive Summary
- A landmark trial regarding social media platforms’ alleged design of youth-addictive products recently began in a California state court, with another such case expected soon in one of the state’s federal district courts; specifically, these cases involve thousands of claims arguing that platforms – including Meta and YouTube – designed products that harm young users, prioritizing growth and engagement over user safety.
- Unlike past litigation that has focused on what content social media presents to users, these cases center on how that content is presented; while protections under the First Amendment and Section 230 of the Communications Decency Act have shielded social media companies from content-related lawsuits, courts are now being asked to determine whether these protections extend to claims centered on “design.”
- Congress has not yet passed major legislation related to social media content, with key bills – the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) – languishing for years; depending on the outcomes of these and related cases, lawmakers could face renewed pressure to act.
Introduction
A landmark trial regarding social media platforms’ alleged design of youth-addictive products recently began in a California state court, with another such case expected soon in one of the state’s federal district courts. These legal proceedings involve thousands of claims arguing that platforms designed addictive products that harm young users – potentially leading to depression, anxiety, and other mental health issues – prioritizing growth and engagement over user safety. The cases bring together major social media companies, including Meta, Google, Snap, and TikTok (the last two settled before trial), while plaintiffs include young users, their families, and school districts.
Unlike past litigation that has focused on what content social media presents to users, these cases center on how that content is presented. Notably, the lawsuits suggest that shielding platforms through speech protections has its limits, as those protections may leave young users vulnerable to risks inherent in the platforms’ designs and features. While protections under the First Amendment and Section 230 of the Communications Decency Act have shielded social media companies from content-related lawsuits, courts are now being asked to determine whether social media is addictive and contributes to mental health harm, whether the companies acted negligently, and whether they had a duty to warn young users about potential risks.
Congress has not yet passed major legislation related to social media content, with key bills – the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) – languishing for years; depending on the outcomes of these and related cases, lawmakers could face renewed pressure to act.
Legal Proceedings and Alleged Harms
Social media companies are under growing scrutiny as a landmark trial recently began in a California state court in Los Angeles, and another such case is expected in a federal district court in Oakland, California this summer. Both cases bring together thousands of claims alleging that social media platforms prioritized engagement over safety, intentionally designing addictive products that harm young users – potentially leading to depression, anxiety, and other mental health problems.

The ongoing trial is the product of a Judicial Council Coordinated Proceeding (JCCP), a state-level legal process in California that coordinates complex lawsuits with multiple plaintiffs and defendants. The JCCP selected the K.G.M. case as the first to proceed, representing the broader set of lawsuits. It was filed by a now 20-year-old who claims social media features such as autoplay and infinite scroll contributed to addiction, depression, and suicidal thoughts. The outcome of this case will likely influence thousands of similar lawsuits. The second legal proceeding is a multidistrict litigation, a federal-level legal process that consolidates similar claims related to social media addiction. Its first trials are scheduled to begin in June in Oakland. This proceeding centers on complaints by school districts that social media addiction has disrupted their schools, and that the companies have failed to warn the public about the risks.
Section 230, First Amendment, and the Decoupling of Design and Speech
For the first time, social media companies are being called to defend their core business models against allegations that their design decisions caused harm to children. Unlike past litigation that focused mainly on what content social media presents to users, these cases focus on the design decisions behind how that content is presented. In the past, when cases centered on content, courts largely viewed algorithms, feeds, and notifications as neutral ways of curating speech, thereby triggering Section 230 and First Amendment protections. Now plaintiffs argue that platforms designed their products in ways that shape user behavior – such as using behavioral science to create addictive features that maximize user engagement. This argument suggests these mechanics are not neutral tools for sharing speech, but design choices that serve the platforms’ interests while harming users. This shift reframes social media platforms not simply as publishers of content, but as products, where algorithms function more as design choices than as editorial features.
These legal actions raise important questions for judges and juries to evaluate. A key moment in the ongoing K.G.M. case occurred in November, when the California state court ruled on Meta’s motion for summary judgment, finding that even if harmful content was created by others, the design of the platform itself could still cause harm, and claims based on that harm might not be barred by Section 230. A decision is expected soon in the multidistrict litigation on a similar motion for summary judgment by the companies.
Policy Implications
These proceedings come at a time of rapid advances in algorithmic personalization, as emerging technologies transform online platforms. Their resolution will have ramifications for efforts to protect kids and teens online. As social media has grown in popularity, the concept of social media addiction has been increasingly discussed. Nevertheless, it is not widely recognized in medical and psychological contexts, and the battle between plaintiffs and defendants will shape perceptions about the risks associated with social media platforms. It is broadly understood that digital media can negatively affect some young people, while providing connection, support, and community for others. Court decisions that include clear determinations about social media addiction and potential liability for associated health harms will have major policy implications. If courts conclude that algorithms and social media feeds are part of how speech is delivered, regulatory efforts targeting platform design may face heightened constitutional review. If design elements are instead treated as separate from speech, new paths for regulation could open. Depending on the ultimate outcomes of these and related cases, lawmakers could face renewed pressure to act.
Current and Future Regulatory Outlook
As judges and juries review the competing claims, Congress remains divided over whether to modernize federal child online safety legislation, and if so, how. The House Energy and Commerce Committee’s Subcommittee on Commerce, Manufacturing, and Trade approved the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act in December. No further legislative action is currently anticipated. As previous American Action Forum analysis explains, debate continues over key issues, including weakened duty-of-care obligations and privacy provisions that may limit the deployment of effective age-assurance techniques. When addressing online safety, privacy, and liability, policymakers face tradeoffs that highlight the challenges of crafting meaningful child online safety legislation. Narrowing Section 230 and expanding liability frameworks may encourage platforms to adopt more aggressive content moderation practices, raising concerns about the over-removal of lawful speech. At the same time, measures intended to enhance child safety, such as age verification, may require expanded data collection, creating new privacy risks.
While the regulatory outlook remains uncertain, these trials could help clarify where the lines are drawn between product design, speech, and platform responsibility. Those decisions may not only spur regulatory action, but also shape the future of social media regulation and influence broader debates over algorithmic systems and artificial intelligence, areas where platform design and user expression overlap.