Insight

Lawmakers’ Misguided Approach to Social Media Content Moderation

Executive Summary

  • Elon Musk’s purchase of Twitter has amplified the already contentious debate over the proper scope of content moderation on social media websites.
  • Both sides of the political aisle dislike many of the processes and decisions related to content moderation made by social media companies; in response, lawmakers have proposed their own content moderation regulations.
  • Republican proposals for regulation include common carriage requirements, which would require companies to transfer all information regardless of content; Democratic proposals, meanwhile, vary but most prominently call for “algorithmic accountability,” which would likely entail curtailing certain Section 230 protections for social media websites.
  • Both these proposals could violate the First Amendment’s protection of social media’s editorial judgment and lead to policy outcomes that would likely harm the quality of moderation rather than improve it; additionally, lawmakers’ threats of punitive legislation against social media companies distort their content moderation policies and further erode trust in these systems.

Introduction

In April, Elon Musk amplified the already fierce debate over the proper scope of content moderation on social media websites by purchasing Twitter for $44 billion. The stated reason for his purchase was a desire to push the company to embrace a free speech model for content moderation, ensuring users can share more information on the platform without moderators silencing such discussion. Many on the political left immediately expressed concerns, with even the White House chiming in to argue that social media companies should apply tougher scrutiny to prevent the spread of false information on a range of political issues and, in particular, the COVID-19 pandemic.

Most agree that social media companies should moderate content to a degree, but engaging in content moderation requires answering very difficult questions, and these decisions can have a significant impact on which news and information people can and cannot access. When a platform makes any decision to limit the reach of, or access to, content, undoubtedly many users will disagree and loudly voice their opinions. What’s more, such moderation, which is frequently imperfect, often feeds the perception that social media staff are biased in favor of their own parochial interests. As a result, pressure from both sides of the aisle will only continue to mount, as Republicans claim their content is being targeted by left-leaning staff and Democrats argue that social media should take a greater role in addressing disinformation and hate speech online.

To address their respective concerns, lawmakers have proposed different bills that would drastically alter how social media platforms moderate user content. Republicans have suggested, among other ideas, applying common carrier requirements to social media companies as part of an effort to regulate these firms like telephony, meaning that they would need to transfer all user information, regardless of content.

Democrats, however, argue for what they call “algorithmic accountability,” which would entail peeling back pieces of Section 230 of the Communications Decency Act, the law that provides legal immunity to platforms for what their users post. This, they assert, would allow regulators to use a heavier hand to combat disinformation and hate speech online.

Both these proposals could violate the First Amendment and, even if they did not, could harm content moderation as a whole. Ideally, social media companies would respond to consumer concerns and pressures from the market, but lawmakers have instead turned to crafting legislation – and often merely threatening it – to influence moderation practices, further politicizing content moderation and making a nuanced discussion about targeted improvements and self-governance reforms more difficult.

Government intervention in the content moderation decisions of social media companies is fraught with legal and policy dangers. At the same time, the lack of a clear solution means the current debates will not subside any time soon. This insight explains the potential pitfalls of Republicans’ and Democrats’ proposals for the regulation of social media and online speech, as well as the risk that political jawboning will distort market forces.

Forcing Companies to Host More Content Through Common Carriage Regimes

On the right, experts and lawmakers have explored common carriage requirements designed to treat social media like telephony, an arrangement requiring platforms to transfer all information regardless of its content. This theory rose to prominence in conservative circles after Supreme Court Justice Clarence Thomas explored the idea in his concurrence in Biden v. Knight First Amendment Institute, a case examining whether social media could qualify as a public forum for First Amendment analysis (a question the Court ultimately dismissed as moot). In Thomas’ view, because social media is a service affected with a public interest, and because some firms arguably have dominant market power that limits users’ ability to go elsewhere, these services are akin to telephony, which must “serve all customers alike, without discrimination.” Under this view, social media companies should receive immunity for the content users post only in return for adopting such a nondiscrimination approach. From a constitutional perspective, courts have upheld must-carry provisions for other technologies, and legal scholars have argued that a law forbidding platforms from discriminating based on content, limited to material readers deliberately choose to read, could be constitutionally permissible.

Yet major flaws exist with this approach. First, simply declaring social media to be a common carriage service doesn’t make it so. Voice telephony, the most analogous common carrier to social media, involves private communication between two users rather than public communication that is fundamentally expressive. In that sense, social media is much more like a newspaper – which is not in any way a common carrier – than a voice telephony network. Social media services also do not hold themselves out to the public at large, but instead impose specific rules and conditions before a person can join the service. Second, even if a regulation were narrowly tailored and focused on must-carry obligations, there are significant distinctions between social media and the types of entities that have been forced to carry speech in the past. For example, courts upheld must-carry provisions for cable largely because cable networks use physical infrastructure in the public rights-of-way, giving the operator a physical bottleneck on communications. Social media has no such physical infrastructure.

But even if such an approach could withstand judicial scrutiny, requiring platforms to host all content would undoubtedly come with risks and concerns. If applied broadly, the requirement would force platforms to leave up all kinds of bad content so long as it doesn’t violate the law, an outcome neither party wants. Even if approached more narrowly – perhaps with a common carriage requirement applying only to political content – platforms would still need to draw lines regarding which content fits into this framework and which does not. Because these companies are risk-averse, they may fall back on common law protections, which absolve them of liability if they refuse to moderate at all, or they may forbid political content altogether to avoid any risk of violation.

Targeting Content Promotion by Legislating Algorithms

On the left, many want social media companies to take down more content that can lead to harms. Of course, Congress will struggle to ban any kind of speech on social media due to the First Amendment. As a workaround, the proposals tend to entail the removal or partial rollback of Section 230 protections, which are designed to give platforms the legal certainty that their moderation of content will not lead to liability for anything the platform does not remove. Of note, some Democrats have taken a critical look at these platforms’ algorithmic amplification of content and questioned whether Section 230 protections should apply to content recommended by social media companies to users.

As the argument goes, social media provides a feedback loop to individuals because its algorithms tend to surface the most extreme and harmful information: Such content drives engagement and therefore increases advertising revenue. To address this, legislators have considered bills such as the Justice Against Malicious Algorithms Act, which would limit Section 230’s protections when a company’s use of algorithmic amplification causes physical or emotional injuries.

This approach may survive First Amendment scrutiny if challenged, but it would run into the same fundamental issues that plague any attempt to reform Section 230. Section 230 builds on the First Amendment by encouraging services to moderate content without fear of liability; reimposing that risk would force companies to essentially dumb down their systems and limit recommendations to users, making it more difficult to share and find content users wish to see. Companies could likely rely on simple algorithms that deliver content as soon as it is posted, but doing so would drastically limit the ability of new content creators to reach a larger audience and of users to discover content relevant to their interests.

Policy merits aside, the approach also highlights the challenge that lawmakers face: They don’t like how social media companies moderate content, but the First Amendment bars them from directly regulating the moderation process. By tying Section 230 protections to algorithmic amplification, lawmakers attempt to circumvent the First Amendment by targeting the functionality of the website itself. In that way, they can push companies to over-remove content to ensure that no harm can stem from the service.

The Perils of Government Jawboning

As lawmakers consider ways to dictate how platforms moderate content – and to circumvent the First Amendment while doing so – they risk further politicizing content moderation, an inherent problem that will lead to significant harms for users of these services. Such meddling would not only fail to improve content moderation on social media but would likely leave both Democrats and Republicans unsatisfied with the result.

Current content moderation isn’t perfect, but it is driven largely by the needs of users and of the advertisers wishing to reach them. When users voice concerns about content moderation, companies take those concerns into account when making future decisions and policies. Different platforms can employ different standards that serve as a point of competition: If users dislike one service, they can move to another.

When lawmakers threaten legislation in response to specific decisions from a company – a practice called jawboning – they drastically distort actual consumer preferences. No longer is the company responding largely to general public opinion about its content moderation decisions; instead, it is reacting to a legislative threat that will have a direct impact on its practices. When the government coerces a private company into a decision with which it has no choice but to comply, courts generally hold such action to be unconstitutional. Whether such a threat exists isn’t always clear, however, and so jawboning may not always rise to the level of unconstitutionality.

Social media has quickly become a battleground for ideas and the future of politics, and as such, both political parties have immense incentive to steer content moderation decisions in their favor. But political jawboning will only further embolden both sides to respond in turn with increasing pressure and threats. These continuing disputes will distort content moderation norms and erode the already brittle confidence that users have in these systems.

Conclusion

Most agree that content moderation on social media platforms needs improvement. That said, content moderation will never satisfy everyone. Users can and should influence platforms’ decision-making by voicing concerns, or by moving to other services if a given company’s approach doesn’t meet their needs. The approach currently pursued by lawmakers – crafting legislation that circumvents the First Amendment to dictate what content is allowed and what isn’t – would distort the moderation process and lower trust in content moderation systems’ ability to fairly appraise the appropriateness of user communications. Nevertheless, the lack of a clear solution to the dispute over moderation likely means these debates will not stop anytime soon; lawmakers must be careful not to make the problem even worse.
