The Daily Dish

Artificial Intelligence and Regulation

Eakinomics: Artificial Intelligence and Regulation

Artificial Intelligence (AI) is one of those buzzwords that gets tossed around a lot, raising fears of massive job displacement and even fundamental threats to society. Yet the exact fears are often inchoate and difficult to identify. Simultaneously, and perhaps causally, there are calls to regulate AI — notably by visible figures such as Elon Musk, Bill Gates, Mark Cuban, and the late Stephen Hawking.

But to me the rush to regulate seems overblown. After all, what could go wrong?

Here are a few insights, largely gleaned from conversations with, and the work of, AAF’s non-artificial intelligence (NAI) Will Rinehart. To begin, it is useful to distinguish between “narrow” AI and “general” AI. Narrow AI consists of models and mechanisms built upon experience and real-world data to achieve very specific objectives such as translating languages, predicting the weather, and reading medical scans. Narrow AI doesn’t seem to raise any new regulatory issues. If it were desirable to limit or preclude the objective, it would already be regulated; how it gets done doesn’t matter so much.

General AI refers to decision-making systems able to cope with a generalized task in the same way a human does. This seems to be what most people associate with the term AI; a good example is Samantha from the movie Her. Since general AI genuinely moves the boundaries of what is possible, it may merit some new regulatory initiatives — but most likely not today. General AI is so far from being ready for deployment that it need not be regulated at present.

Of course, that will not stop everyone. An expert or pundit will generate an apocalyptic prediction about how general AI will evolve, and there will be calls to regulate it to a standstill. This would be a mistake because those predictions are likely way off the mark. A retrospective review of technology forecasts found that predictions beyond a decade were hardly better than a coin flip, and a similar analysis focused on AI predictions warned against “the general overconfidence of experts.”

This uncertainty is a fundamental reality of AI, and it is similar in character to other uncertainties that regulators face — the evolution of technologies, the structure of markets, the scale and scope of firms, and so forth. In those circumstances, regulatory forbearance is the right strategy. It is the right strategy for AI as well.

Fact of the Day

Americans pay between 64 and 78 percent of worldwide pharmaceutical profits, despite the United States accounting for only 27 percent of global income.
