Insight

“Big Is Bad” Is Bad for AI

Executive Summary

  • The Federal Trade Commission (FTC) recently launched an inquiry into generative artificial intelligence (AI) investment to see whether “investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition.”
  • The development of large language models and advanced machine learning capabilities benefits significantly from the resources that big tech firms can provide, including access to data, computing power, and engineering expertise.
  • Adding overly strict regulatory roadblocks could throttle American AI leadership, jeopardizing both our economic growth and national security as nations across the globe race to lead in developing the technology.

Introduction

American technology companies currently lead the world in artificial intelligence (AI) research, development, and deployment, but overly strict regulatory policies could jeopardize growth in the sector. The Federal Trade Commission (FTC) recently launched an inquiry into generative AI investments and partnerships to examine whether “investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition.” The inquiry focuses on some of the largest tech firms in the country, including Alphabet, Amazon, and Microsoft, which have all drawn scrutiny from the Biden Administration’s antitrust enforcers in other matters unrelated to concerns about AI and innovation.

While the Biden Administration pursues an approach to competition policy characterized by hostility to market concentration and large firms broadly – or “big is bad” – it often ignores the significant benefits that come with firm size. Previous American Action Forum (AAF) work highlights the competitive benefits that come with size, as well as how those benefits often improve both the quality and price of goods and services for consumers. Size also allows firms to develop products and technologies that would be prohibitively costly for smaller firms with fewer resources. In the landscape for AI development, large firms compete with one another to quickly and efficiently develop new foundation models, such as OpenAI’s GPT-4 and Alphabet’s Gemini Ultra, and other AI tools. Leading firms can leverage their existing consumer data, computing power, and engineering expertise to create general-purpose models and establish new ecosystems in which smaller developers and new use cases can flourish. These firms can also commercialize models, as they best understand the needs of their large consumer bases.

The Biden Administration should carefully review the state of competition in AI and examine transactions as they arise, but any intervention in the market should target specific competitive harms rather than prevent growth for the sake of preventing concentration. Nations across the globe are currently racing to lead the world in AI, not only for economic growth but in the interest of national security. Preventing firms from achieving the scale necessary to fully develop models could set the United States back on both fronts.

Concentration in AI Development and FTC Study

AI generally refers to the use of computers and algorithms to mimic the problem-solving and decision-making capabilities of the human mind. In that regard, AI is not a new concept, dating back as far as the 1950s. Development of AI models, especially the generative and large language models used to produce audiovisual or textual outputs, requires significant resources, most notably data, computing power, and engineering expertise. As a result, typically only larger technology firms have been able to invest in and develop AI, raising potential antitrust concerns, as some fear these firms will dominate the AI market moving forward.

To address these concerns, the FTC recently announced a 6(b) study into generative AI investments. Section 6(b) of the FTC Act authorizes the agency to conduct studies into the market trends and business practices of firms, potentially developing materials for the FTC to use in future formal enforcement actions against the companies operating in the market. The FTC sent compulsory orders to Alphabet (Google’s parent company), Amazon, and Microsoft, as well as AI firms Anthropic PBC (partnered with Amazon) and OpenAI (partnered with Microsoft), to provide the FTC with information regarding recent investments and partnerships to help the agency better understand these relationships and their impact on the competitive landscape.

While competitive harms could arise, and the FTC does have a role to play in protecting competition in AI development and deployment, the agency appears to be taking an overly precautionary approach to the technology. As FTC Chair Lina Khan put it, “[a]s companies race to develop and monetize AI, we must guard against tactics that foreclose [new markets and healthy competition].” Rather than focusing on the development of the technology and how to ensure consumers benefit, the FTC appears to be focusing instead on ensuring that no company grows too large, regardless of the competitive effects. This approach is especially problematic in a field where many smaller firms and models can outcompete existing incumbents on narrow tasks, providing competitive restraints on potential monopolistic behavior.

AAF has written extensively on the flawed logic of this approach to competition policy generally. If firms use anticompetitive conduct to acquire or maintain a monopoly, illegally coordinate with rivals, or merge with firms in a manner that would substantially lessen competition and harm consumers, regulating agencies can and should step in. Going after firms simply out of concern about concentration in and of itself, however, could jeopardize AI development in the United States.

Benefits of “Big Tech” in AI Development

Generally speaking, larger firms with more resources have an advantage when researching and developing new technologies. AI development, for example, requires data to train and deploy the model, computing power to run queries, and engineering expertise. While startups and smaller firms can develop models with limited access to data, computational power, or labor, leveraging the resources of large technology firms can improve the quality and functionality of models.

Training data is the initial data set used to teach an AI model how to make a prediction or perform a desired task. The more robust the training data set, the better the output, as the model can draw from more data sources. Large technology firms have access to more data on their own consumers, while access to more sensitive data is generally restricted to protect consumers. For smaller firms, partnering with large firms can provide greater access to this data and ultimately improve model accuracy and diversity.

At the same time, large firms have access to more data regarding the products and services that their consumers use. This allows the firms to better commercialize the models into products and services to improve the consumer experience. A model may have general applications, but large firms can integrate specific models into their existing products and offerings, improving competition and the quality of the existing products.

Computing power, likewise, is largely concentrated among large technology firms and generally refers to both the software and hardware necessary to run the computations required to perform a task. This includes graphics processing unit chips, the software that enables their use, and data center infrastructure. Because developing computing power is resource intensive and costly for startups, large technology firms are often the only providers of many of these components. Large firms can also benefit from economies of scale to develop and deploy targeted models for specific tasks.

Finally, large technology firms often have significant in-house expertise, meaning both the big tech firm and smaller partners can benefit if the model development runs into challenges. Acquiring talent in AI development is not easy, but by leveraging the expertise of these firms, we can expect U.S. models to improve at a much faster rate.

Taken together, it is unsurprising that “big tech” is quickly assuming a leading role in AI and partnering with smaller firms to further develop AI models. There are many benefits to this mode of AI development, as progress will accelerate as firms leverage the data, computing power, and expertise of large technology firms.

Geopolitical Risks of “Big Is Bad” Antitrust Enforcement

A precautionary approach to AI that focuses solely on concentration can unnecessarily harm U.S. leadership in the development and deployment of AI, which can have significant implications for economic and national security.

First, leading on AI brings significant economic benefits. In 2020, global revenue from AI software, hardware, services, and sales totaled around $318 billion. By 2026, that number is expected to nearly triple, reaching $900 billion. By 2030, AI could contribute $15.7 trillion to the global economy, impacting a wide range of industries and potentially raising total factor productivity, a key driver of sustained, long-run economic growth. The United States and China stand to benefit most due to the two countries’ existing leadership in the technology, but if the United States stalls the growth of its AI industry, international rivals could fill the gap, meaning new companies and developers will move to foreign jurisdictions to maximize revenues.

Second, leading on AI promotes national security. For example, the Department of Defense is using AI to autonomize weapons and equipment, employ facial recognition tools for analyzing intelligence, and provide recommendations on the battlefield such as where to target missile strikes. While such investments will almost certainly continue, having a robust AI industry in the United States better allows the federal government to implement these technologies. Similarly, cybersecurity firms use AI to better identify and eliminate cybersecurity risks in a system, preventing foreign cyberattacks from causing major disruptions to U.S. networks. AI tools can even be used in disinformation and destabilization campaigns, and ceding leadership to foreign adversaries limits our ability to identify and respond to the use of generative AI online.

Finally, leading on AI can help shape the future of the Internet. The United States and China have fundamentally opposing views regarding free speech online. While many in the United States have expressed concerns about online interactions, the U.S. approach to the Internet has led to the most robust information exchange in world history, allowing individuals to connect and share ideas and information at a scale never before achieved. China, meanwhile, uses its centralized power to restrict the free flow of information and limit what its citizens can see. AI tools will largely shape the future of the Internet, especially as platforms increase in scale and size. Ceding leadership to China or other autocratic regimes could jeopardize the future of the Internet writ large as these tools are deployed to limit the spread and access to information.

Conclusion

The FTC’s 6(b) study could provide valuable insights into the competitive landscape in AI, and regulators should enforce the laws as they apply to these firms. At the same time, the Biden Administration should be careful to avoid an overly restrictive regulatory approach to AI development that could jeopardize U.S. leadership in the technology.
