
Understanding Calls for Regulating Artificial Intelligence

Elon Musk, Bill Gates, Mark Cuban, and the late Stephen Hawking have been among the most vocal luminaries calling for the regulation of artificial intelligence (A.I.), but they are hardly alone. Countless papers, conferences, and talks dedicated to algorithms and artificial intelligence call for the same. Yet many of these calls jump straight to proposals to tax, regulate, and limit artificial intelligence without detailing the harms at stake or explaining how the market has failed.

Embedded in these calls for new government power are countless uncertainties about the direction of technology. Yet, the track record of technology forecasts is far from stellar. One of the largest retrospective reviews of technology forecasts found that predictions beyond a decade were hardly better than a coin flip. In an analysis that focused specifically on A.I. predictions, the authors warned of “the general overconfidence of experts, the superiority of models over expert judgement, and the need for greater uncertainty in all types of predictions.” Predictions that general A.I. is just around the corner have failed countless times across several decades.

This uncertainty indicates a fundamental reality about A.I. It is a developing collection of technologies with a tremendous variety of applications. As a result, the goal for policymakers should not be a singular A.I. policy or strategy, but a regulatory and policy approach that is sensitive to developments within society, leaving room for innovation and change.

Terms and Origins

To understand artificial intelligence, it is helpful first to define terms, especially “narrow” A.I. and “general” A.I. Narrow A.I. refers to models built using real-world data to achieve narrow, specific objectives, such as translating languages, predicting the weather, spotting tumors in chest scans and mammograms, or estimating caloric information from pictures of food.

Narrow A.I. can be contrasted with general A.I., which refers to decision-making systems able to cope with any generalized task, much as a human can. Arnold Schwarzenegger’s early-1990s movies and, more recently, Samantha from the movie Her represent this kind of A.I. While some fret over the risks posed by super-intelligent agents with unclear objectives, task-specific A.I. holds immediate promise, while general A.I. remains far from realization.

The diversity of what one thinks of as A.I. extends beyond these categories, too. Machine learning is another commonly referenced term; it denotes a process whereby a machine learns patterns from data rather than following explicitly programmed rules. Yet the boundaries between A.I., machine learning, and more standard computer programming are blurry. In practice, there often isn’t much difference between narrow A.I. and complex computer programming such as machine learning. But there is an upside to this diversity, as Stanford’s “One Hundred Year Study on Artificial Intelligence” noted: “[T]he lack of a precise, universally accepted definition of A.I. probably has helped the field to grow, blossom, and advance at an ever-accelerating pace.”

This diversity stems from the technology’s democratic origins. As the Obama White House noted in its “Preparing for the Future of Artificial Intelligence” report,

The current wave of progress and enthusiasm for A.I. began around 2010, driven by three factors that built upon each other: the availability of big data from sources including e-commerce, businesses, social media, science, and government; which provided raw material for dramatically improved machine learning approaches and algorithms; which in turn relied on the capabilities of more powerful computers.

In January of 2010, the machine learning library scikit-learn was released to the public, democratizing the tools of A.I. and helping spark the current rush. The library grew out of a Google Summer of Code project, and many different companies and entrepreneurs have since applied these tools in manifold ways. As Representative Will Hurd said in June, “The United States boasts a creative, risk-taking culture that is inextricably linked to its free enterprise system.”
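To illustrate how low the barrier to entry has become, the sketch below builds a narrow, task-specific model with scikit-learn and its bundled handwritten-digits dataset; the particular classifier and parameters are illustrative choices, not a recommendation.

```python
# A minimal sketch of "narrow A.I.": a few lines of scikit-learn that learn
# to recognize handwritten digits from the library's bundled example dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 labeled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # "learn" from real-world data
predictions = model.predict(X_test)  # apply the model to its narrow task

print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.2%}")
```

A hand-coded, rule-based program could attempt the same task, which is why, in practice, the line between narrow A.I. and conventional software is more a matter of degree than of kind.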

Google, Facebook, Microsoft, and other large technology companies have played a large part in the development of A.I. While it has been popular of late to criticize these firms, policymakers should be comfortable with them taking the lead on A.I. implementation. Even though these companies have been lambasted for their size, they shouldn’t be penalized for adopting advanced technologies.

The democratic nature of A.I. development over the last decade means that there are a variety of experiments in the ecosystem, and shifting to A.I.-embedded processes will not be frictionless for firms or social institutions. As the American Action Forum has noted before, firms face significant practical hurdles in implementing A.I.-driven systems: they are not cheap, and most automation efforts fail to achieve positive results. The same kinds of implementation problems exist in government institutions as well. In the most comprehensive study of its kind, George Mason University law professor Megan Stevenson tracked Kentucky’s statewide implementation of an algorithm meant to inform judges’ bail decisions. While there were significant changes in bail-setting practices at first, over time these changes eroded as judges returned to their previous habits.

Regulation and A.I.

The shifting landscape and unclear implications of A.I. mean that policymakers should adopt three outlooks regarding narrow A.I. regulation.

First, A.I. is a general purpose technology, like electricity, the automobile, the steam engine, and the railroad, that will have a variety of regulatory impacts. A.I. is not going to make industry regulation converge; it will make regulation more variegated. Thus, calls to impose a singular regulatory framework on A.I. are misplaced. Some industries might need clarity, others might need a shift in liability rules, and yet others might need additional consumer safeguards. Still, we are a long way from those deep societal impacts. In the near term, then, policymakers should be alert to potential barriers that could hobble growth in A.I. applications, which might necessitate liberalizing existing rules.

Second, premature action is likely to be deleterious to A.I. innovation and progress, as privacy regulation in Europe has shown. A rush to legislate A.I. applications, and thus constrain and narrow them, would signal to investors and innovators that their time, money, and talents should be put elsewhere. Such a shift would be a real loss, as the opportunities for A.I. applications are enormous. The United Kingdom’s National Grid has turned to A.I. to reduce service outages. Facebook and MIT are using A.I. to generate addresses for people around the world who lack them. Even the New York Times is getting into the game, using A.I. to power a recommendation feed for its readers.

Regulatory restraint does not mean leaving consumers exposed to harm. Consumers can be protected if policymakers choose the route of soft law. As Ryan Hagemann, Jennifer Huddleston, and Adam Thierer explained, “soft law represents a set of informal norms, multi-stakeholder arrangements, and non-binding guidance standards that provide an adaptable alternative to more traditional regulations or legislation.” These approaches have been successfully applied to autonomous vehicles, the Internet of Things, advanced medical technologies, FinTech, and electric scooters. Relying on soft law would be a smart strategy for A.I. regulation as well.

As a final matter, policymakers should temper concerns about the ethical implications of A.I. The Terminator scenario may be well known, but it is not indicative of the current hurdles that A.I. researchers face. Instead, practitioners tend to be concerned with more concrete obstacles, such as avoiding unintended side effects, preventing reward hacking, ensuring scalable supervision, and stopping undesirable behavior during the learning process.

Moreover, countless organizations are dedicated to these ethical problems, such as Data & Society, the Ethics and Governance of A.I. Initiative, and the A.I. Now Institute, just to name a few. Companies are beginning to hire researchers focused on A.I. ethics and are creating internal A.I. ethics boards, and educational institutions are beginning to incorporate ethics into their curricula. As computer science professor Yevgeniy Vorobeychik explained in a filing, “the vast majority of A.I. researchers already set public good, broadly construed, as their aim.” Policymakers should be optimistic about society’s ability to consider and act on A.I.’s ethical implications with both speed and nuance.

In short, policymakers should embrace regulatory restraint, although there are opportunities for policy to strengthen A.I. deployment.
