Insight
November 21, 2024
Primer: A Look at the Biden Administration’s Approach to AI Regulation
Executive Summary
- As Congress failed to pass comprehensive artificial intelligence (AI) legislation over the past four years, the Biden Administration turned to the administrative state to develop agency-level frameworks and guidelines mainly leveraging the existing authority of federal agencies.
- Meanwhile, without a single federal standard to preempt them, many states have implemented a wide variety of policies designed to regulate different aspects of AI, particularly in the areas of government use, civil rights, intellectual property, and transparency – although this patchwork of state AI regulations is likely to add costs and confusion to the industry.
- The incoming Trump Administration, working with Congress, faces the choice of pursuing a comprehensive federal law that preempts state laws or adopting a more sector-specific, decentralized approach that emphasizes flexibility and adaptability.
Introduction
President-elect Donald Trump’s opportunity to build upon or reshape the Biden Administration’s artificial intelligence (AI) regulatory approach raises questions about the future direction of AI regulation in the United States. In the absence of comprehensive legislation from Congress, the Biden Administration sought to craft federal AI policy through executive actions and agency-led initiatives. In particular, the Executive Order (EO) on Safe, Secure, and Trustworthy AI and the Blueprint for an AI Bill of Rights were designed to establish high-level principles around the use of AI, including ethical and responsible AI development and application, intended to target the core issues of safety and security, privacy, and consumer protections. This approach provided federal agencies with substantial flexibility to shape guidelines around the principles established in the Biden Administration’s AI initiatives, but without congressional support, the impact of this approach will be limited, and its future under a new administration is unclear.
Meanwhile, in the absence of a single federal standard, many states have begun to roll out their own AI regulations. States have introduced more than 500 AI-related bills over the past two years, of which more than 60 have been enacted. These state-level efforts cover a broad range of concerns, with a significant portion of the bills focusing on government use of AI, preventing deceptive content in elections, and education measures that seek to leverage AI for both public and private-sector advancement by promoting AI literacy and supporting training initiatives. States are thus playing a pivotal role in AI governance, addressing specific regional needs.
Nevertheless, this patchwork of state AI regulations is adding costs and confusion to the industry, and the Trump Administration, working with Congress, faces the choice of pursuing a centralized, comprehensive framework for AI regulation or a more sector-specific and decentralized approach that supports flexibility and adaptability. While a flexible approach could allow for innovation and growth in the industry, depending on how rigorous the regulations are, it also risks perpetuating a fragmented regulatory landscape, potentially resulting in inconsistencies and ambiguities that harm the industry. This primer examines the AI policy initiatives undertaken during the Biden Administration and outlines the regulatory paths available to the incoming Trump Administration.
AI Policy Initiatives During Biden’s Administration
The Biden Administration
Over the past four years, the Biden Administration has prioritized implementing high-level principles for the use of AI, including responsible and ethical AI development, that prioritize safety, civil rights, and innovation. One of the most significant initiatives was the president’s 2023 EO on Safe, Secure, and Trustworthy AI, a broad directive that mandates government agencies to evaluate AI’s impact, calls for federal standards to ensure the safe deployment of AI and for protections of Americans’ privacy, advances civil rights, and seeks to mitigate potential harms such as job displacement and bias that AI could bring to workplaces and communities. The EO, however, has drawn significant scrutiny for empowering the administrative state to implement a wide range of regulations that could hamstring AI development in the United States.
In addition to the EO, the White House introduced the Blueprint for an AI Bill of Rights, a framework to guard against algorithmic harms, such as discrimination and privacy breaches, across multiple sectors. While nonbinding, this blueprint serves as a guideline for federal agencies, encouraging the responsible development and deployment of automated systems. Yet its voluntary nature led to some criticism, with advocates arguing that stronger, enforceable protections are necessary to curb potential harms by AI systems.
Federal Agencies
Largely under direction from the EO, federal agencies have rolled out a variety of initiatives to manage and promote AI. The National Institute of Standards and Technology (NIST) has led the way with the AI Risk Management Framework, which offers industry practitioners a roadmap for safe and ethical AI use without enforcing rigid mandates. Additionally, NIST’s AI Safety Institute has ramped up efforts to research and implement practices to address AI risks across critical areas, from national security to individual rights, aligning closely with the priorities outlined in the AI EO.
The National Science Foundation also has prioritized AI development with the National AI Research Resource pilot, aiming to bolster AI research and education to help the United States maintain its competitive edge in AI development. Across other agencies, action has been substantial, though limited by existing legislation. With no new congressional mandates, agencies are confined to working within their current authority – whether establishing new rules under existing laws or issuing updates and interpretations on AI-related matters. While this approach isn’t exhaustive, it has laid a foundation for AI policy across the federal landscape that the next administration could build upon.
Congress
At the congressional level, AI policy has been largely shaped by bipartisan efforts, with more than 120 AI-related bills introduced in Congress. Although most of these bills will never make it into law, they reflect policymakers’ current concerns with AI advancements. Moreover, while most AI-related bills introduced in Congress have received bipartisan support, implementation strategies often become divisive, as lawmakers differ on regulatory approaches, funding priorities, and determining which AI issues should be addressed first, further complicating efforts to pass comprehensive AI laws. These divisions in part explain the Biden Administration’s decision to instead rely on agency-led efforts to set AI policy.
Based on the American Action Forum’s AI legislative tracker, there is a clear legislative focus on two major categories: “mitigating harms” and “government use of AI.” Together, these categories make up more than 65 percent of all bills introduced, reflecting policymakers’ primary concerns about the ethical and transparent deployment of AI and its responsible use in federal operations. The “mitigating harms” category alone represents 49 percent of the tracked legislation, with several bills seeking to restrict the influence of AI-generated content in elections, some focusing on transparency requiring AI-generated content to be labeled or watermarked to minimize public deception, and others addressing civil and privacy rights and intellectual property protections.
The “government use” category, comprising 19 percent of the bills, centers on the strategic deployment and regulation of AI within federal agencies, aiming to boost effectiveness while safeguarding civil rights. Beyond harm mitigation and government use, other categories in federal AI legislation include research and development (13 percent) – focused on advancing AI capabilities through collaborative initiatives – enabling AI use (12 percent) – focused on AI integration and support for various sectors – and workforce development (8 percent) – which prioritizes preparing both public-sector employees and the broader workforce with the skills needed to work alongside emerging AI technologies. This distribution reflects a broad strategy whereby congressional action seeks to balance innovation with accountability, security, and civil protections.
While numerous AI-related bills are moving through Congress, significant legislative momentum has been building in the past few months. At the end of July, the Senate Commerce Committee advanced four AI-focused bills. Among these, the bipartisan Future of AI Innovation Act aims to maintain U.S. leadership in AI and emerging technologies, while the TAKE IT DOWN Act, also bipartisan, unanimously passed the committee and would criminalize the publication of non-consensual, sexually exploitative images, including AI-generated deepfakes. Following the Senate’s lead, the House Committee on Science, Space, and Technology passed nine more AI-related bills in September. These bills primarily focus on enhancing AI education, establishing standards for AI systems through NIST, and expanding the AI workforce.
State level
In the absence of congressional action, states have taken a proactive legislative stance. The AI state legislation tracker from the National Conference of State Legislatures highlights key areas of focus for states leading in AI policy. With more than 500 AI-related bills introduced over the past two years, and more than 60 bills enacted, New York, California, and Illinois are leading in the number of introduced bills, while Maryland and Utah have the highest numbers of enacted measures.
A significant portion of the state-enacted bills focus on government use of AI, with an emphasis on establishing oversight mechanisms, ensuring responsible use, and enhancing transparency to guide ethical integration in state agencies. Elections are another major focus of state legislation, especially regarding issues of transparency, preventing deceptive content, and mitigating the risks posed by deepfakes. Education bills seek to leverage AI for both public and private sector advancement, promoting AI literacy, supporting training initiatives, and funding studies to assess AI’s impact on learning. Health and employment also feature prominently, with legislation geared toward exploring AI applications in health care, evaluating labor market impacts, and creating frameworks for ethical use in these sectors. Finally, several bills target private-sector use to introduce governance standards, encourage responsible AI adoption, and address provenance and consumer transparency concerns.
Looking Forward
The Biden Administration’s reliance on the AI Executive Order and the AI Bill of Rights to guide federal agencies, coupled with the lack of comprehensive legislation to formalize these initiatives, leaves uncertainties about the future of AI regulation in the United States and its implications for the country’s AI development. While EOs play a critical role in directing agency operations and policy implementation, they are inherently impermanent – subject to revision or repeal by future presidents or nullification by Congress. As such, the 2023 AI Executive Order and the subsequent agency action on AI would require robust legislative backing to ensure their longevity and shield them from political shifts if the incoming Trump Administration seeks to take a similar approach to AI.
Therefore, the Trump Administration, in collaboration with Congress, now faces a pivotal decision: to develop a centralized, comprehensive AI regulatory framework or adopt a more sector-specific, decentralized approach that emphasizes flexibility and adaptability. President-elect Trump has expressed intent to repeal Biden’s AI EO, but could choose to continue to impose regulations for specific harms, such as intellectual property issues or antitrust concerns. Alternatively, with full control of both chambers of Congress and the White House, the incoming administration may instead attempt to codify a de-regulatory approach to AI, especially if it wishes to preempt state laws.