October 28, 2021
Lessons from Past Technopanics for Current Social Media Debates
- The current push for social media regulation aims to mitigate the platforms’ impact on mental health, yet these concerns are largely fueled by information that was incomplete or taken out of context; current evidence on social media’s mental health impact is inconclusive and suggests that more research is needed.
- Some of these concerns, such as the potential impact of social media on teens and other vulnerable groups, are a natural phenomenon in the widespread adoption of novel technologies, as changing applications and a less tech-savvy user base often reveal unforeseen shortcomings.
- While there are real challenges that have arisen due to the novelty of these technologies, policymakers must be cautious not to fall prey to “technopanics” by passing highly disruptive legislation that could yield unintended consequences.
The recent Wall Street Journal (WSJ) report on Facebook’s internal research into its products’ mental health impacts on users, mainly teenage girls, has reignited the debate about the role social media plays in exacerbating mental health issues. This concern is one of many among policymakers on both sides of the aisle, who have also voiced legitimate concerns about the potential contribution of social media platforms to problems such as the sale of counterfeit items, sex trafficking, bullying, and political radicalization.
These concerns have become one of the justifications for the push to regulate “Big Tech,” namely Google, Amazon, Facebook, and Apple. Most of the proposals focus on the platforms’ content moderation practices, aiming to increase the legal liability platforms bear for hosting undesirable content. While real challenges have arisen due to the novelty of these technologies, policymakers must be cautious not to fall prey to “technopanics” by passing highly disruptive legislation that could yield unintended consequences.
On September 14, 2021, the WSJ published a report claiming that internal Facebook studies, shared by former Facebook employee Frances Haugen, established that Instagram was harmful to adolescent girls’ mental health. Facebook published a blog post in response on September 26, 2021, contending that the WSJ report was misleading and mischaracterized the results of its internal research. As Facebook highlights, its internal research concluded that on 11 of the 12 major issues surveyed, Instagram had an overall positive impact for teenage girls. The blog post also provided the annotations to the slide decks featured in the report, which supply additional context.
In the wake of Haugen’s testimony to the Senate Commerce Committee on the matter, animosity toward social media seems to have hit an all-time high. Some in Congress have used her testimony as justification to push for heavy-handed regulation of content moderation in social media. Nonetheless, various scholars have warned that Congress’ rush to regulate might be fueled by ungrounded fears and an exaggerated interpretation of current data.
Is Social Media Making Our Lives Worse?
Subsequent reports reveal that the evidence regarding negative impacts from teenagers’ use of social media is inconclusive. While many of these concerning issues are indeed present in social media use, most predate the rise of these platforms, some by decades. Because social media is a tool through which people express themselves and interact with one another, the same problems that existed in personal interactions before social media will naturally persist there.
This seems to be the case with cyberbullying and eating disorders. For example, a study by Catherine Bradshaw of the University of Virginia found that in 2005, 30 percent of students reported being victims of bullying. While cyberbullying is currently the fastest-growing type of bullying, the presence of bullying on social media is a symptom of its prior existence in society; social media is not the cause of the behavior. Others have also pointed out that body image issues are not a problem exclusive to social media but are present in other, non-internet spheres, such as celebrity culture and fashion magazines.
New Technologies Spark New Fears
The eruption of new technologies, especially as they are widely adopted, is often accompanied by unforeseen issues and concerns. This has been the case at multiple points in history, with the emergence of new music, video games, books, and even electricity. These fears often stem from what has been described as a “moral panic wheel”: societal beliefs trigger the production of research in line with those beliefs, and that research in turn feeds the moral panic as news outlets, politicians, and others focus on the studies that confirm their fears.
This vicious cycle seems to be a natural reaction to new technologies as they grow beyond their initial niche audiences into the mainstream, where less savvy users face more problems or usage extends beyond the technologies’ originally intended applications. The Information Technology & Innovation Foundation (ITIF) has named this pattern “panic cycles”: the widespread adoption of new products leads to the discovery of unintended consequences or unnoticed flaws, which are then overcome through a mix of social learning, policymaking, and the adoption of new social norms. At that point, society has learned and adapted enough to these paradigm-changing technologies that the concerns that fueled the initial panic have been resolved or can be easily avoided or mitigated.
Is Stronger Regulation Adequate to Fix These Problems?
While there is room for legitimate concern over some of these issues, extensive regulation is not an efficient solution. In reaction to the WSJ report on social media’s influence, policymakers renewed calls to reform or repeal Section 230 of the Communications Decency Act, the key law governing online content moderation. Section 230 establishes that platforms are generally not legally liable for content their users post and protects platforms’ good-faith content moderation decisions. Proposals to change it could affect the internet at large rather than the handful of platforms they intend to target. For example, repealing or reforming Section 230 could have serious implications for the booming gaming industry and for fast-growing online marketplaces such as Etsy. Similarly, abandoning the consumer welfare standard, as some have proposed, would imply a radical change in antitrust law that could cease to put consumers first.
The Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA) are two examples of well-intentioned regulations that have unexpectedly impacted users and online platforms. These bills, intended to fight sex trafficking online, created a carve-out in Section 230 under which a company deemed to be deliberately and knowingly assisting sex trafficking operations loses the liability protections Section 230 grants and can be held liable for those crimes. While on paper this approach seems specific and reasonable, its application has been notoriously inconsistent. The law’s scope has continuously broadened, holding platforms liable even in cases where they actively removed and reported content within days. This has led platforms to take overly precautionary approaches that restrict online free speech in order to reduce the risk of legal liability. Tumblr, for instance, prohibited all sexually explicit content to avoid legal ramifications should its content moderation systems fail to filter out illegal material.
Platforms themselves have started to act on content-related mental health concerns. For example, Instagram gradually rolled out a feature that allows users to hide the Like count on their posts, moving away from the “social competition” dynamics some criticize. It also announced a pilot project, Instagram Kids, which would allow users under 13 to sign up for a version of the platform with additional parental controls and limited capabilities; that pilot is now suspended. Apple has likewise announced restrictions for its iMessage app that would prevent underage users from sending and receiving sexually explicit content. This feature has also been delayed after the announcement raised various privacy concerns, which the company says it is addressing. These cases show the need for a flexible, adaptable approach that accounts for the nuances and tradeoffs inherent to content moderation and child safety, which one-size-fits-all policies often fail to address.
Current debates about social media’s potential impact on our lives are fueled by both facts and exaggerated fears. History has shown that as new technologies emerge and are widely adopted, unforeseen consequences and issues naturally arise, as use evolves beyond the original intent and less tech-savvy individuals adopt the technology. Despite the current push in Congress for stronger social media regulation, policymakers may not be best equipped to solve this problem. As with other technologies, consumers will usually adopt new social norms, and providers will adapt their products to tackle these novel issues. Policymakers’ attempts to regulate these issues away often result in rigid policies that ignore the nuances of content moderation, potentially eroding users’ privacy or reducing free expression online. Unlike government actors, platforms are usually better equipped to respond quickly to mistakes, since they have an incentive to address user feedback promptly and correctly; their businesses often depend on it.