Insight
September 3, 2025
The AI Action Plan: A Solid First Step, but the Path Remains Long
Executive Summary
- The Trump Administration recently released a document entitled “Winning the AI Race: America’s AI Action Plan,” a wide-ranging approach to artificial intelligence (AI) regulation that emphasizes deregulation and endorses using federal fiscal power to pressure states to refrain from passing their own AI laws.
- While some argue that reducing burdensome regulation and avoiding a patchwork of state laws is essential to creating a unified, innovation-friendly environment that promotes U.S. competitiveness in the global AI race, others caution that leaving key areas unregulated poses serious risks and emphasize the need for states to serve as laboratories of democracy.
- The administration’s AI Action Plan’s influence will be limited, however, without federal legislation; therefore, Congress should seek to create an AI framework that balances a light-touch regulatory approach with targeted safeguards to address specific harms that could arise from AI development and use.
Introduction
On July 23, 2025, the Trump Administration released a document entitled “Winning the AI Race: America’s AI Action Plan,” a national strategy aimed at securing U.S. leadership in artificial intelligence (AI). One of the plan’s objectives is to advance U.S. AI innovation by reducing federal regulatory barriers and putting pressure on states to refrain from passing their own AI laws. Notably, the plan lays out a federal approach – led by executive agencies – to identify and roll back regulations, rules, memoranda, and other policies that may hinder AI development. It also proposes limiting federal AI funding to states with restrictive regulatory frameworks, with the goal of preventing the establishment of regulatory burdens that could harm U.S. competitiveness in AI.
This document marks the latest federal push to foster a unified, innovation-focused regulatory environment and reflects a growing effort to preempt state-level AI laws – an increasingly debated policy area. Supporters argue that by minimizing regulatory barriers and fragmentation, the United States will be better able to preserve its leadership in AI, especially considering the wave of state-level AI legislation over the past two years that may hinder its development. Critics, however, point out that without regulation in critical areas, the public could face risks, and highlight the value of letting states act as laboratories of democracy for new technologies.
Nevertheless, the Action Plan’s proposed steps to protect AI innovation will be unenforceable without congressional action, highlighting the urgent need for Congress to establish a national AI framework that balances a light-touch regulatory approach with targeted safeguards to address specific harms. The first step, however, must be identifying where such safeguards are most needed, ensuring they mitigate risk without stifling innovation.
The AI Action Plan
The Trump Administration released “Winning the AI Race: America’s AI Action Plan,” aimed at securing U.S. leadership in artificial intelligence. The plan outlines 103 federal policy actions across three pillars: “Accelerating Innovation,” “Building American AI Infrastructure,” and “Leading in International Diplomacy and Security.” Notably, within the first pillar is a deregulatory push to “remove red tape and outdated rules” that hinder AI development. In particular, the plan calls for a Request for Information from businesses and the public to identify federal regulations that may stifle AI innovation and adoption. It also directs the Office of Management and Budget (OMB) and federal agencies to identify, revise, and repeal regulations, rules, memoranda, and other policies that could stall development. Together, these steps reflect a clear departure from the previous administration’s safety-first posture, signaling a policy shift toward accelerating AI innovation.
This deregulatory effort is only a first step toward streamlining conflicting rules and easing regulatory burdens, and theoretically states would still maintain a wide reach to regulate AI. While the plan does not explicitly call for federal preemption, it proposes conditioning federal funding on states’ regulatory choices, a move designed to discourage state regulation of AI. The intent is to prevent a patchwork of conflicting AI laws that could burden developers with onerous compliance obligations that slow down deployment.
Additionally, the plan takes positive steps across a range of areas. Notably, it prioritizes innovation and outlines key areas for progress – including modernizing infrastructure, advancing AI use in government, updating export control strategy, supporting open-source models, expanding access to high-quality datasets, developing technical standards, and investing in workforce development. Whether this new strategy succeeds, however, will depend not only on executive influence but also on whether Congress steps in to create a national framework that is both innovation friendly and targeted at specific harms.
The Trade-offs of Pushing for a Unified AI Framework
The AI Action Plan’s call for cutting regulations that hinder AI innovation and adoption comes with a mix of trade-offs. On one hand, supporters see value in a more unified federal approach, one that avoids a messy patchwork of conflicting state laws. Around 700 AI-related bills were introduced in state legislatures in 2024, and that number is expected to grow in 2025. Considering that nearly every small business now uses AI – whether for marketing, finance, cybersecurity, or everyday operations – a patchwork of laws would create legal uncertainty, higher compliance costs, and reduced access to tools. Notably, one study found that compliance – the work companies do to ensure their AI systems and their use align with relevant laws, regulations, and ethical standards – could represent 17 percent of the total cost of building an AI system. Thus, a complex regulatory landscape could raise costs for small businesses, hindering their ability to innovate in AI.
A clearer federal framework could reduce these problems, giving big and small developers more room to innovate and helping the United States stay competitive, especially against countries such as China. Without burdensome regulations and a patchwork of state rules, AI companies would have greater clarity, enabling them to focus resources on research and development, rather than navigating state requirements.
The Case for State AI Regulation
Critics argue that leaving key areas of AI unregulated – and preempting states – risks exposing consumers to harm, including threats to their privacy, personal data protections, and security. Additionally, some have warned about the risks of limiting state efforts without offering a clear federal framework in return, given the slow pace of federal action on comprehensive AI legislation.
The approach also raises concerns about limiting the ability of states to test different approaches. States often act as “laboratories of democracy,” meaning they create their own rules and laws on issues the federal government has not addressed. States can tailor protections to local needs and test creative policy approaches that, if successful, can be used in other states or at the federal level.
The administration’s Action Plan does not directly call for preemption of state AI rules and laws, however; instead, it endorses conditioning federal funding on states’ regulatory choices. Congress often leverages the disbursement of federal funds as an incentive for states to adopt certain policies by attaching conditions to those funds. States can either accept the money with its conditions or decline and retain their independence. But Congress’ power here is not unlimited. If conditions attached to federal funds are so punitive that states have no real choice but to comply, then what the federal government considers a mere “incentive” a court can deem unconstitutional coercion. For example, in National Federation of Independent Business v. Sebelius (2012), the Supreme Court held it was coercive for Congress to threaten states with the loss of all Medicaid funds unless they expanded coverage to new populations.
Additionally, the AI Action Plan flags that the federal government “should not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation,” leaving room for states to continue enforcing laws. Further, even with the AI Action Plan in place, states can continue to address AI-related harm through generally applicable laws, such as those targeting deceptive practices, civil rights violations, or fraud.
Ultimately, while outright federal preemption of state law may be a goal for some lawmakers, the AI Action Plan does not endorse such sweeping measures. That decision is ultimately left for Congress to make.
The AI Action Plan Is a Step Forward – But It’s Not a Framework
The AI Action Plan’s emphasis on deregulation and executive-driven action underscores a critical gap in legislative leadership. Only Congress has the authority to establish a durable, national AI governance framework, one that balances innovation with clear safeguards, protects consumers, and avoids a patchwork of conflicting state laws.
The AI Action Plan should be seen as a starting point. It identifies key priorities and may help identify where existing rules are inconsistent or outdated. As noted, however, it is not a substitute for legislation. Congress must now engage meaningfully with industry, researchers, and civil society to address the policy challenges AI presents. The goal should be a thoughtful, technically grounded regulatory approach that both supports innovation and addresses real-world risks.
Several models are under discussion, including a light-touch approach that minimizes early regulatory barriers to encourage experimentation and growth; national standards that offer unified, predictable rules for AI development and deployment; and risk-based frameworks that focus regulatory attention on high-impact use cases such as hiring, lending, or criminal justice. Each approach carries its own benefits and trade-offs. But, given the dual priorities of fostering innovation while ensuring safeguards where there is genuine potential for harm, a light-touch regulatory approach appears to be a promising means to achieve that balance.
Finally, considering that AI presents a pacing problem for regulation – with laws struggling to keep up with rapidly evolving technologies – a light-touch regulatory approach is well-suited to address this gap. Heavy, top-down rules risk becoming outdated quickly and may stifle the experimentation, creativity, and investment needed to advance AI.
Conclusion
The Trump Administration’s AI Action Plan is a step in the right direction, as it helps identify key priorities and endorses repealing or reforming existing laws that may hinder AI development. But its impact will be limited without congressional involvement. Congress should seek to set an AI regulatory framework that employs a light-touch approach, paired with targeted safeguards where risks are clear.