Primer: Why Algorithms Are Important to Public Policy

What is an algorithm?

Algorithms are best thought of as digital recipes: they perform a set of operations, in a logical order, with a dependable outcome. While Google search results and the Facebook News Feed are the best-known examples, algorithmic processes have been adopted widely. For example, they help establish credit scores, power GPS navigation, and help us along when we buy airline tickets online.
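To make the recipe analogy concrete, here is a minimal sketch of a purely hypothetical scoring rule. The function name, base score, and weights are all invented for illustration; real credit scores are computed very differently. The point is only that an algorithm is a fixed sequence of steps that turns the same inputs into the same output every time.

```python
# Toy illustration: an algorithm is a fixed sequence of steps
# that turns inputs into a dependable output.
def simple_credit_score(on_time_payments: int, missed_payments: int) -> int:
    """A hypothetical, greatly simplified scoring rule (not a real one)."""
    score = 600                       # start from a base score
    score += 10 * on_time_payments    # reward on-time payments
    score -= 25 * missed_payments     # penalize missed payments
    return max(300, min(850, score))  # clamp to a familiar range

# The same inputs always give the same output:
print(simple_credit_score(12, 1))  # -> 695
```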

The term algorithm is often used interchangeably with machine learning and artificial intelligence, and in colloquial use the distinctions among the terms are often murky. Because algorithms are a set of rules, they entail a specific set of operations. Machine learning, on the other hand, refers to a set of techniques that give computers the ability to learn without being explicitly programmed. Machine learning is thus one kind of algorithm and also a form of narrow artificial intelligence. AAF has previously written on policies to foster the growth of artificial intelligence.
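The distinction can be sketched with a toy example. Everything here is invented for illustration, and the "learning" step is far simpler than real machine learning: the first function encodes a rule a human wrote down, while the second derives its rule from labeled examples instead.

```python
# Toy contrast between a hand-written rule and a "learned" one.
# (Illustrative only; real machine learning uses far richer models.)

def spam_rule(subject: str) -> bool:
    """Explicitly programmed: a human wrote this rule directly."""
    return "free money" in subject.lower()

def learn_keywords(examples: list[tuple[str, bool]]) -> set[str]:
    """'Learned': derive flag words from labeled examples
    instead of hard-coding them."""
    spam_words: set[str] = set()
    ham_words: set[str] = set()
    for subject, is_spam in examples:
        words = set(subject.lower().split())
        (spam_words if is_spam else ham_words).update(words)
    return spam_words - ham_words  # words seen only in spam

examples = [
    ("free money now", True),
    ("meeting agenda for now", False),
]
print(learn_keywords(examples))  # words that appeared only in spam examples
```

Feed the second function different examples and it produces a different rule, with no human rewriting the code; that is the sense in which machine learning "learns without being explicitly programmed."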

Algorithms begin and end with human interaction. Individuals are necessary both to start the process and to do something useful with the outputs. With ever-expanding data sets, algorithmic usefulness is proliferating. For example, a Japanese cucumber farmer is using machine learning to sort the crop more quickly, reducing work time. In health care, algorithms are also being tested to help reduce antibiotic use, get seniors the care they need, and improve heart and lung transplantation procedures. In education, they are being tested to make engineering more accessible, and could be used to tailor educational opportunities to the needs of students. In the near term, many industries ripe for disruption could see rising quality and falling costs with the implementation of algorithms.

Why should policymakers care about algorithms?

Algorithms offer promise for government agencies, allowing for higher-quality services as well as better decision making. For example, the Securities and Exchange Commission has taken a more active role in spotting suspicious trading, aided by algorithms that track the frequency and destination of trades.
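As an illustration only (the SEC's actual surveillance tools are not public and are far more sophisticated), frequency-based flagging might look something like this toy sketch, which marks traders whose daily trade count sits far above the group average:

```python
import statistics

# Toy sketch of frequency-based trade surveillance (illustrative only).
def flag_unusual_traders(trade_counts: dict[str, int],
                         z_cutoff: float = 2.0) -> list[str]:
    """Flag traders whose trade count is far above the group average,
    measured in standard deviations (a simple z-score test)."""
    counts = list(trade_counts.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # everyone trades identically; nothing stands out
    return [trader for trader, n in trade_counts.items()
            if (n - mean) / stdev > z_cutoff]

counts = {"A": 10, "B": 12, "C": 11, "E": 9, "F": 10, "D": 300}
print(flag_unusual_traders(counts))  # -> ['D']
```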

There are also perils. Some worry that these new techniques will bias results in ways that aren't transparent to citizens and consumers. But it is often difficult to untangle preexisting social tendencies, which are merely reflected in algorithmic outputs, from decisions made in the construction of an algorithm. Context matters, and policymakers should be sensitive to the institution deploying an algorithm and the legal context in which it operates. While we should examine all kinds of bias, especially in the private sector, government misuse of algorithms poses the greatest immediate threat to liberty, since these algorithms feed into sentencing and policing decisions.

Policymakers will need to consider the framework under which these tools are regulated and should not be too quick to regulate in a manner that limits innovation in this space. Research has shown that after seeing an algorithm err, people tend to avoid it, even in cases where the algorithm performs better than humans; this tendency is known as algorithm aversion. We should resist the temptation to defer to faulty human judgment, especially when doing so means worse decisions will be implemented.

Any new regulation should undergo a three-part test, which should also apply to algorithm-related proposals:

  1. Prove the existence of market abuse or failure by documenting actual consumer harm, following the approach set by the Federal Trade Commission;
  2. Explain how current law or rules are inadequate, and show that no alternatives, such as market correctives, deregulatory efforts, or public-private partnerships, could solve the market failure; and
  3. Demonstrate how the benefits of regulation will outweigh its implementation costs and other associated regulatory burdens.

What kind of legal framework currently exists for algorithms?

A number of laws and a body of legal decisions limit and regulate private industry's use of algorithms. While difficult to summarize, this body of law protects sensitive information like health and credit data while still allowing innovation to take place. Many commercially deployed algorithms aren't used in sensitive areas of an individual's life; instead, companies like Google and Netflix use these processes to suggest content to consumers. Courts have also treated algorithms as protected speech in a number of cases: Google successfully argued in Search King, Inc. v. Google Tech., Inc. that the First Amendment applies to its search results. Even if algorithms themselves are not considered protected speech, their outputs clearly are, and any future agency regulation of them will rightly face significant hurdles in court.

What questions are important to consider for this debate?

While we are still in the early stages of this shift in computing, policymakers will face a series of important questions in the coming years:

  • For private entities, what is the harm that is continually occurring with the use of this algorithm?
  • Is that harm tangible or intangible?
  • What does the current legal regime have to say about the algorithm’s context?
  • Is the algorithm operating in a space that already faces stiff regulation, like health, education, and finance?
  • What does the First Amendment have to say?
  • What potential effects will regulating this tool have on innovation?
  • What were the outcomes of the decision-making process both before and after the algorithmic method was implemented?
  • How did expert bias change under the new algorithmic decision-making process?
  • Could agency misuse pose an immediate threat to liberty?