Insight

No, Big Tech Did Not Violate the First Amendment

Executive Summary

  • A number of large technology companies have recently acted to limit access to their services by President Trump and some of his supporters, prompting claims that the companies are violating free speech principles and should have legal protections, most notably Section 230, revoked.
  • As private actors, online platforms can enforce their terms of service and remove users if they deem it necessary; as a result, their actions do not violate the First Amendment.
  • Revoking Section 230 protections, which provide a liability shield against suits over user-generated content, would ironically lead platforms to moderate far more aggressively, removing opinions from across the political spectrum.
  • Antitrust law should continue to be used to maintain a competitive market, not to achieve other policy goals, such as addressing concerns about online speech, for which it is a poor fit.

Introduction

Following the riot at the United States Capitol, many prominent tech companies have taken steps to deplatform or remove various individuals and services associated with the violence. Twitter, Facebook, and Instagram have all suspended President Donald Trump’s accounts. Apple and Google removed the alternative social media app Parler from their app stores for hosting content associated with the violence at the Capitol and for its continued failure to moderate against further incitement to violence. Other, smaller platforms have also taken steps to limit the use of their services; for example, payment processor Stripe and shopping service Shopify suspended their services for several Trump campaign-associated sources. Some have criticized these moves as unfair discrimination and argued that they represent censorship or violate the First Amendment. Others have raised concerns that these moves reflect an overconcentration of power in the hands of the largest tech companies in a way that stifles speech and harms the free flow of ideas. These critics have renewed calls for tech regulation, including potential modification of Section 230, a liability shield that protects content moderation decisions, and have revived allegations that “Big Tech” companies are monopolies.

Private Actors and the First Amendment

Twitter, Facebook, Amazon, and other private companies have decided to remove accounts or suspend services. It is often forgotten, however, that these companies have First Amendment rights themselves. They are allowed to decide what content to carry via their terms of service and whom to allow to have accounts.

No state actor silenced these voices or compelled the companies to remove the content; as a result, there is no First Amendment violation. The courts have repeatedly held that a private social media platform is neither a “public square” nor a state actor that must uphold the First Amendment. As such, any cases challenging these bans are likely to fail.

In the past, many on the right have defended the rights of private companies to make decisions about whom to associate with or what cakes to bake. These rights have their limits when it comes to civil rights laws concerning discrimination, but in general companies are allowed to decide whom to do business with. Social media platforms and other online services provide conduits for speech that have greatly benefited a wide range of individuals and viewpoints, including conservative voices, that traditional media might have ignored or considered too small to cover.

Even more than the typical user, the president is hardly without a microphone. More easily than any other American or policymaker, he can directly address the American public through traditional media. This is not a case of Big Tech violating the president’s or other users’ First Amendment rights; it is better understood as these companies’ decision to exercise their own.

The Impact of Section 230

Section 230—the law that provides online platforms liability protection from suits over user-generated content, allowing them to engage in content moderation as they see fit—has faced further criticism in light of the recent decisions by platforms and the use of social media by bad actors. For example, following Twitter’s decision to ban President Trump, Senator Lindsey Graham (R-SC) tweeted, “I’m more determined than ever to strip Section 230 protections from Big Tech (Twitter) that let them be immune from lawsuits.” But removing Section 230 would have been unlikely to change the actions that occurred, and it would likely lead many platforms to moderate content even more aggressively.

Because of Section 230, online platforms can allow controversial ideas to be discussed on their services (within the limits of federal law) without fear of being sued for carrying such content. There are notable limitations to this liability protection, such as for child pornography and sex trafficking, but in most cases platforms are free to make their own decisions regarding what content is allowed on their services. They are also allowed to determine how they enforce these guidelines, what content violates them, and which, if any, users to remove.

Without Section 230, platforms could be held liable for controversial content posted by their users. Platforms would thus face a moderator’s dilemma: either engage in heavy-handed moderation to remove anything potentially offensive or harmful that could trigger a suit against the platform for hosting it, or forgo moderation entirely. Most online platforms would probably choose heavy-handed moderation, since, as Reddit co-founder Alexis Ohanian pointed out in a tweet, “What they all eventually learn is users WANT moderation.”

Those on the right do not hold a monopoly on criticism of Section 230; many on the left have criticized it for some time. Given the role of apps such as Parler in the violence, and complaints from some on the left that mainstream platforms did not act against President Trump’s accounts soon enough, criticism of Section 230 is likely to continue from both sides of the aisle. It is important to remember, however, that overzealous content moderation would not apply to only one viewpoint or only to the material one side dislikes. The risk of potentially company-crushing litigation could chill companies from carrying speech on any number of controversial topics, such as the #MeToo movement, for fear that they could be liable for defamation or other harms. The result would be that important conversations about controversial topics would be more limited and more difficult to have.

There will be many concerns about the role of social media in the events of January 6, and many concerns about the decisions made regarding accounts and services in the aftermath. Changes to Section 230, however, would likely stifle non-harmful speech around sensitive topics as well.

Big Tech’s Actions are Not Evidence of a Lack of Competition

Finally, some of the president’s supporters and conservative technology critics have pointed to the recent actions—particularly those by Apple, Google, and Amazon against the alternative social media site Parler, which have taken the service offline—as evidence that Big Tech is too powerful. These companies hold a monopolistic position that stifles not only competition but speech, the argument goes, and requires an antitrust response. But the internet is far more than Big Tech, and these actions themselves do not necessarily indicate collusion.

Some are alleging that the large tech companies are coordinating their actions to silence conservative voices. For its part, Parler has filed an antitrust lawsuit against Amazon Web Services (AWS) over its decision to stop hosting the app, alleging collusion between the website-hosting service and Twitter, Parler’s primary competitor. These allegations of a conspiracy are unlikely to succeed, as the complaint offers no plausible basis for finding that such an agreement exists. Many online platforms have similar guidelines governing the use of their app stores or services, so it makes sense that they would react in similar ways to a broad phenomenon. For example, Apple removed Parler for hosting objectionable content in the form of direct threats of violence and for failing to develop sufficient moderation practices to respond to that content. AWS informed Parler it would cease to provide services because Parler failed to remove content inciting violence, a decision easily grounded in the company’s own acceptable use policy. Broader social pressures, not collusion, are the far more likely driver of these decisions.

Some of the complaints about these decisions smack of hypocrisy. Just recently, some advocates for regulating Big Tech, including Senator Josh Hawley, applauded MasterCard for terminating its services on Pornhub and urged other payment services to do the same. Now that a similar action has been taken against Parler, however, some of these same critics allege that “tech giants” are conspiring against conservatives rather than exercising their rights as private companies to enforce their own policies.

Looking more broadly, Amazon and the major app stores were never the only options for server hosting and access to consumers. While Amazon may be a leader in cloud services, there are numerous other competitors, including Oracle and IBM, with which Parler could attempt to negotiate services. Parler’s creators also could have chosen to make the app available for direct download rather than through an app store. In short, Parler is reaping the consequences of its founders’ choices, not facing a Big Tech conspiracy to remove a competitor.

To be sure, some have raised questions about whether policymakers should distinguish between decisions made at the level of individual sites and decisions made at a more foundational level (for example, website hosts or internet service providers). Such an approach would most likely require treating these services as utilities and would result in significant changes both to internet business models and for consumers; it would be a significant policy shift whose full impact cannot be addressed here. For now, these companies are private actors with their own terms of service, and they are allowed to decide whom to provide services to within the bounds of those terms and current law.

Policymakers and consumers should be concerned about calls to use antitrust enforcement against Big Tech over private companies’ choices to enforce their policies. Private antitrust litigation can result in higher prices or fewer options for consumers. Legislative action could have an even greater impact, as it could make content moderation decisions an actionable harm under antitrust law. Proposals to break up Big Tech would not resolve concerns about content moderation or access to services: smaller companies would still face decisions about whom to allow to use their services and, in the case of harmful or unpopular content, would have the same incentives to choose whom to serve.

The technology ecosystem remains an innovative and dynamic market. Changing antitrust laws to achieve preferred outcomes regarding Big Tech would likely not only fail to achieve those outcomes but also harm consumers and a vibrant sector of the economy.

Conclusion

Whether you applaud or detest the recent decisions made by online platforms, it is important to remember that these are private actors and not the government. While these actions did not violate the First Amendment, many policy proposals in response could yield greater intervention by the government into speech and private actions that would give rise to First Amendment concerns. The internet is far more than just Big Tech.
