Comments for the Record

Comments regarding “Guidance for Regulation of Artificial Intelligence Applications”

Agency: Office of Management and Budget

Comment Period Opens: 1/13/2020

Comment Period Closes: 3/13/2020

Comment Submitted: 3/9/2020

Docket No.: 2020-00261

The Advantages of a “Soft Law” Approach to Governing Artificial Intelligence

I appreciate the opportunity to provide comments on the draft memorandum to federal agencies regarding Guidance for Regulation of Artificial Intelligence (AI) Applications. This comment does not represent the views of any particular party or special interest group but is intended to assist regulators in creating a policy environment that will continue to facilitate innovation in the AI space.

I applaud the memorandum’s emphasis on maintaining an approach that will preserve a “robust innovation ecosystem” while also protecting important values, including civil liberties. The proposed memorandum displays an important degree of regulatory humility that recognizes the potential role for state policies regarding such technologies while also acknowledging that restraint from regulatory action may be needed at times. Furthermore, the memorandum also reflects an awareness that existing regulations may prevent the development or adoption of certain AI technologies and that agencies might need to consider reducing such barriers. With these general principles in mind, this comment addresses the following from the memorandum:

  1. The importance of avoiding an overly precautionary approach to AI governance;
  2. The advantages of maintaining regulatory flexibility in governing emerging technologies such as AI; and
  3. Issues regarding federalism in approaching technology policy.

The Importance of Avoiding Over-Regulation of AI

The United States has emerged as a leader in many emerging technologies because it has avoided an unnecessarily precautionary approach. By intervening only to minimize real risks rather than to avoid all potentially plausible worst-case scenarios, an innovation-enabling permissionless approach reduces barriers to entry in a wide range of emerging technologies and enables Americans to fully receive their benefits.

Specific applications of AI hold great promise and are already yielding many benefits, from improved traffic routing to medical diagnostic tests to fraud alerts from financial institutions.[1] Future AI applications could improve any number of industries, from agriculture to transportation and beyond.

Unfortunately, the emergence of new technologies is often accompanied by fears, whether about disruption or the potential for abuse. Even as AI’s benefits are already being experienced, prominent figures including Elon Musk and the late Stephen Hawking have expressed fears about the catastrophic potential of certain applications.[2] While such concerns draw the most extreme dystopian conclusions, other concerns focus on the ways these systems could further exacerbate underlying bias. There are ways to address such concerns narrowly, including improving the quality of data sets and encouraging transparency. A more diverse workforce in AI or narrowly tailored regulations addressing real and specific harms can also help address some of these concerns.[3]

Rather than assume new regulation is needed, agencies should first examine whether existing laws can address the problem. For example, existing anti-discrimination laws may already cover many of these concerns and may need only targeted updates to address specific harms, such as the use of AI in the criminal justice system, alongside broader discussions of potential reforms.[4] The framework and approach laid out in the memorandum recognize these concerns while seeking to fully encourage the potential benefits of AI. Such an approach understands that failure to develop a necessary level of public trust could deter the development of AI.

Even with these concerns, agencies should avoid a dystopian and precautionary view in their policies and instead seek to properly understand real risks and tradeoffs. As ITIF’s Daniel Castro and Michael McLaughlin point out, fear-based policies, such as those that treat a technology as too dangerous to allow (for example, bans on certain applications), that require a technology to first prove itself safe, or that impose unnecessary regulations on its use, can slow research and deployment not only of the perceived harmful uses of a technology but also of its beneficial uses.[5] In some cases there may be highly probable, real, irreversible, catastrophic harms that require regulatory action to prevent, but regulation of otherwise beneficial technology should focus on such cases rather than on broad concerns that might both fail to address the true harm and prevent positive uses.[6] To encourage agencies to avoid the precautionary approach, the draft memorandum is right to note that agencies should avoid an “approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”[7]

As with the governance of many other technologies, this approach contrasts with the European approach, which focuses on minimizing any possible risk of new technologies rather than creating a policy environment of low barriers to entry that could maximize their benefits. As the Mercatus Center’s Adam Thierer noted, “Europe is doubling down on the same policy regime it used for the internet and digital commerce. It did not work out well for the continent then, and there are reasons to think it will backfire on them again for AI technologies.”[8] The memorandum encourages experimentation and innovation by remaining largely hands-off and prefers narrowly tailored responses and ex post remedies when possible, as opposed to requiring innovators to seek government permission or work through slow processes that risk making an innovation outdated by the time it is approved.

Of course, as the memorandum notes, all regulatory policy involves tradeoffs.[9] Agencies should seek not to eliminate all risks, but rather to determine when such risks are likely to occur or which technology truly poses existential risks that outweigh the advantages of the improvements they bring. Even when considering these risks, regulators should consider how narrowly they can target their approaches as well as the potential consequences and risks these approaches themselves may have.

Regulatory Flexibility and Emerging Technologies

The memorandum reflects and encourages the general agency momentum away from more formal and strict regulation in addressing new and emerging technologies. The proposed memorandum recognizes that regulation may at times be needed, but that agencies should consider other, non-regulatory approaches to address certain risks.[10] The three proposed examples of policy guidance and frameworks, pilot programs and experiments, and voluntary consensus standards all represent forms of “soft law” that have been used successfully to provide regulatory frameworks enabling technologies in other areas, such as autonomous vehicles.[11] A light-touch approach has many advantages: while it can provide a degree of certainty and address potential risks that might arise, it maintains a degree of flexibility that does not overly constrain the development of an emerging technology.[12] Still, there are legitimate concerns about how such administrative actions could devolve into unchecked and unaccountable expansion of the administrative state.[13]

“Soft law” instruments such as voluntary consensus standards, pilot programs, and policy guidance have been useful in governing emerging technologies because of their timeliness as well as their flexibility. “Soft law” tools can overcome the “pacing problem,” where technology moves faster than the policymaking that governs it, while maintaining a flexible approach that allows for the policy framework to evolve along with the technology.[14] In some cases this “problem” can be a benefit by allowing technology to reach the general public quickly rather than becoming stymied in red tape; in other cases, however, particularly for already-regulated industries, such as emerging transportation like drones or autonomous vehicles, the inability of policy to adapt can delay the deployment of new technologies.[15] The memorandum properly identifies that soft law tools should be part of agencies’ potential approaches to AI.

The guidance also appropriately suggests that many AI technologies will require interagency cooperation. AI-related policies may overlap many agencies’ expertise, and cooperation will prevent the burdens and innovation-deterring confusion that could come from conflicting agency actions or guidance.[16] At the same time, the type and nature of regulation will likely need to be varied by agency. As Will Rinehart previously noted, “AI isn’t going to converge industry regulation but make it more variegated. Thus, calls to impose a singular regulatory framework on AI are misplaced. Some industries might need clarity, others might need a shift in liability rules, and yet others might need additional consumer safeguards.”[17] Encouraging interagency cooperation will limit the chances that these varied uses create conflicts that burden the development of AI technology in responding to these different industry needs.

This memorandum represents the light-touch approach that has allowed the United States to be a leader in many emerging technologies. Additionally, such an approach should be considered not only for new regulatory regimes that could support innovation in currently emerging technologies, but also for its potential to remove red tape and barriers to entry in currently regulated areas.[18]

The Role of Federalism in Governing Emerging Technologies

The draft memorandum conveys that agencies should apply principles of federalism—i.e., they should not automatically prevent existing or potential actions by state and local governments but should address those “inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.”[19] In many cases, states have certain advantages that can encourage technological innovation and novel policy tools for governing it.[20] Yet some technologies also naturally require a federal framework due to their borderless nature and impact on other states.[21]

States may be able to show the impact of different regulatory approaches to emerging technologies; nevertheless, a patchwork of state laws risks disrupting the overall federal approach to a technology. States can encourage experimentation and novel policy governance through tools such as sandboxes, pilot programs, or broader deregulatory action.[22] This can be seen in the way some states have created programs for FinTech and autonomous vehicles that enable experimentation with both technologies without federal intervention.[23] Of course, a state-by-state approach can also limit the development of technology by creating overregulation or a disruptive patchwork.[24] In these cases, federal preemption may be necessary to encourage the continued development of important technologies such as AI. In other cases, however, a state’s policy choices may have an impact only within its borders and may result in competing regulatory models that help other policymakers observe the potential impact of different policy approaches and determine how best to encourage innovation while minimizing risks.

In the case of AI applications, some specific, focused policies and applications may be possible at a state level, but many principles and applications will require a federal framework to prevent innovation-deterring disruption.

Conclusion

The draft memorandum promotes a regulatory approach that is already being used for other emerging technologies such as autonomous vehicles. It also continues the permissionless approach that enabled the United States to be a leader in innovation in many aspects of the digital economy, in contrast to the consequences of the more heavy-handed and precautionary European approach. Applied to AI governance, this approach discourages a rush to overly precautionary regulation and encourages regulatory humility both in assessing the need for regulation and in selecting the tools utilized.

[1] Adam Thierer & Jennifer Huddleston Skees, Finding Our Humanity with AI, U.S. News, Jan. 2, 2018,

[2] See Catherine Clifford, Hundreds of AI Experts Echo Elon Musk, Stephen Hawking in Call for Ban on Killer Robots, CNBC, Nov. 8, 2017,

[3] See Sarah Myers West, Meredith Whittaker & Kate Crawford, Discriminating Systems: Gender, Race and Power in AI, AI Now Institute (2019), retrieved from discriminatingsystems.html.

[4] Caleb Watney, Fairy Dust, Pandora’s Box…or a Hammer, Cato Unbound, Aug. 9, 2017,

[5] Daniel Castro & Michael McLaughlin, Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence, Information Technology & Innovation Foundation, February 2019,

[6] See id.; Adam Thierer, Permissionless Innovation at 34.

[7] Memorandum for the Heads of Executive Departments and Agencies on “Guidance for Regulation of Artificial Intelligence Applications” at 2.

[8] Adam Thierer, Europe’s New AI Industrial Policy, Medium, Feb. 20, 2020,

[9] Memorandum for the Heads of Executive Departments and Agencies on “Guidance for Regulation of Artificial Intelligence Applications” at 4.

[10] Memorandum for the Heads of Executive Departments and Agencies on “Guidance for Regulation of Artificial Intelligence Applications” at 6.

[11] Ryan Hagemann, Jennifer Huddleston Skees, & Adam Thierer, Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future, 17 Colo. Tech. L.J. 37 (2019).

[12] Id.

[13] See Clyde Wayne Crews Jr., Mapping Washington’s Lawlessness: An Inventory of Regulatory Dark Matter, Competitive Enterprise Institute,

[14] Adam Thierer, The Pacing Problem and the Future of Technology Regulation, Aug. 8, 2018,

[15] See id.

[16] Jennifer Huddleston, Disrupting Deference for Disruptive Technology, CSAS Working Paper 19-35, available at

[17] Will Rinehart, Primer: How to Understand and Approach AI Regulation, American Action Forum, Jan. 10, 2019,

[18] See Jennifer Huddleston, The Future of Micromobility May Require States to Rethink Old Laws, Jan. 22, 2019, (discussing such an issue in the regulation of scooters and other micromobility).

[19] Memorandum for the Heads of Executive Departments and Agencies on “Guidance for Regulation of Artificial Intelligence Applications” at 2.

[20] Jennifer Huddleston, Soft Law and Emerging Technology in the States, The Journal, Fall 2019, available at

[21] Jennifer Huddleston & Ian Adams, Potential Constitutional Conflicts in State and Local Data Privacy Regulations, Regulatory Transparency Project of the Federalist Society, Dec. 2, 2019.

[22] Jennifer Huddleston, What States and Cities Do Right to Promote Innovation, Oct. 9, 2018,

[23] See id.

[24] See Huddleston & Adams, supra note 21.