Trump Administration Outlines Approach to Regulating AI


  • The Trump Administration’s recent draft guidance on regulating artificial intelligence (AI) outlines a light-touch approach, recommending that agencies avoid attempting to regulate all risk out of the market.
  • The guidance would allow the market to continue driving innovation in AI technologies and establish a level playing field among states.
  • While the administration’s policy recognizes that there will be failures as the technology progresses, it anticipates that possible benefits will outweigh risks.


Whether in driverless cars, automated assistants, or other applications that have yet to be developed, artificial intelligence (AI) holds promise for improving lives and increasing economic productivity. Yet while the technology is still being developed, the federal government has, until recently, largely lacked a regulatory approach. How federal agencies regulate AI has enormous implications for how much society will benefit from innovation in this area.

To help develop a consistent, government-wide policy on AI, the Trump Administration recently released a draft guidance memorandum with principles for how federal agencies should regulate AI. The document outlines what federal agencies should consider when creating regulatory actions that affect AI applications either directly or indirectly. While the memorandum is currently only a draft, the administration will finalize it following a period of public comment, as Executive Order (EO) 13,859, “Maintaining American Leadership in Artificial Intelligence” (issued in February 2019), requires the Office of Management and Budget director to issue such a memorandum.

The Trump Administration outlines a light-touch approach to regulating AI. The administration’s goal, according to the EO, is to “sustain and enhance the scientific, technological, and economic leadership” of the United States. Accordingly, the memorandum sets out a vision that justifies regulation of AI on an as-needed, rather than precautionary, basis that allows the private market to drive innovation. This analysis briefly explains the guidance and explores its implications for the regulatory process and regulations in general.


The core of the Trump Administration’s policy on regulating AI is that, in order to foster innovation and growth in AI, regulators should avoid a precautionary approach. In other words, they should avoid attempting to regulate all risk and uncertainty out of the market. Agencies should also remove or modify existing regulations that act as barriers to AI development. Regulation should only be used in situations where it is demonstrably needed or to ensure that state and local governments do not create a de facto national standard that would chill innovation.

This core policy resonates through 10 “principles for the stewardship of AI applications” that can be sorted into three areas. Area one pertains to public trust. The administration notes that federal agencies should weigh the risks of AI betraying the public’s trust — including by violating privacy, individual rights, autonomy, and civil liberties — against the possible benefits of AI. The administration advises that regulatory or non-regulatory responses to these risks be evaluated on a case-by-case basis. It also recommends that agencies provide ample opportunity for the public to weigh in on possible actions, which it believes will help build public trust.

A second area aims to ensure that agencies use analytical rigor and quality information to inform their decisions on how, or whether, to regulate. This requirement includes upholding scientific integrity, assessing risks, weighing benefits and costs, and pursuing approaches that are performance-based and not overly prescriptive.

The final area deals with upholding certain values. The principles here steer agencies toward regulatory decisions that are fair, non-discriminatory, transparent, protective of safety, and well-coordinated among agencies.

Beyond the 10 principles, the memorandum also emphasizes non-regulatory approaches to addressing AI, including sector-specific guidance, pilot programs such as regulatory sandboxes (which allow experimentation in a controlled setting), and voluntary consensus standards. Last, it focuses on reducing barriers to the deployment and use of AI, including by providing access to federal data, encouraging agency involvement in the development of consensus standards, and ensuring that regulatory approaches are consistent with international cooperation.


The Trump Administration’s market-oriented approach to regulating AI, rather than a precautionary approach, is more likely to maintain an environment where companies and individuals can innovate. While clearly stating there is a role for regulation when it comes to AI, the administration is focused on getting out of the way and allowing private companies to innovate and compete on a level playing field. The likely result is that AI innovation will increase productivity, and thus economic growth.

In addition to the economic benefits, there is another compelling reason to avoid taking a precautionary approach to regulating AI: potential safety improvements. This is particularly true in the transportation sector. While the draft memorandum is the first step toward a more formal policy on AI regulation, federal agencies have in recent years taken regulatory steps that demonstrate the value of a light-touch rather than precautionary approach.

One such example is the Federal Railroad Administration’s (FRA) May 2019 decision to withdraw an Obama Administration proposed rule that would have required at least two crew members on trains. The FRA decided to withdraw the proposal because of a lack of evidence that it would improve safety. The FRA also recognized that requiring a certain crew size on a train would inhibit development of automated rail technologies that could, in time, prove to be safer than those operated by humans. Further, the FRA used the withdrawal to assert that its decision preempts the many state efforts to mandate crew size, which would create a de facto national standard (though litigation in this area is ongoing).

Some critics argue that the Trump Administration’s plan is too vague to serve as a proper guide for agencies trying to determine whether or how to regulate emerging AI. Yet it is fitting that guidance prescribing a light-touch regulatory approach is itself light touch. The Trump Administration is deferring to agency expertise to determine what kind of regulatory approach is appropriate, or whether any regulation is needed at all. Mandating that agencies take a certain approach would run counter to this policy vision. Further, the memorandum’s emphasis on ensuring the public’s trust and upholding important values such as fairness and transparency should provide agencies enough direction and information to avoid reckless decisions.


The Trump Administration’s preference for a light-touch regulatory approach should ensure that AI innovation continues. A precautionary approach, in contrast, would likely chill advances in the technology unnecessarily. The policy outlined in the recent draft guidance recognizes that there will be regulatory failures as the technology progresses but anticipates that the benefits of limited intervention will outweigh the risks. The policy is not purely laissez-faire, however: it also establishes principles that agencies should follow to safeguard the public’s trust in AI and important individual rights.