Insight
March 25, 2025
The Forthcoming Artificial Intelligence Action Plan
Executive Summary
- Last month, the U.S. Office of Science and Technology Policy released a Request for Information to guide the Trump Administration in crafting an “Artificial Intelligence Action Plan.”
- Thousands of responses were submitted, and five core themes emerged as significantly overlapping across organizations: export controls and U.S. leadership; infrastructure investment; regulatory frameworks; ethical artificial intelligence (AI) development, safety, and security; and intellectual property.
- Moving forward, the Trump Administration says it will use these comments and themes to develop an AI Action Plan that pinpoints policy measures to reinforce and expand U.S. leadership in AI.
Introduction
In a move to cement the United States’ dominance in artificial intelligence (AI), President Trump’s recent Executive Order on Removing Barriers to American Leadership in Artificial Intelligence will set the stage for an ambitious “AI Action Plan.” The order calls for the creation of this “action plan,” and as part of the process, the Office of Science and Technology Policy launched a Request for Information in February inviting the American public to share their comments on what the administration should prioritize in AI development.
Thousands of responses were submitted by a range of interested parties, from tech companies and business groups to think tanks and sector-specific advocates. While the comments cover a wide range of topics, five core themes emerged: export controls and U.S. leadership; infrastructure investment; regulatory frameworks; ethical AI development, safety, and security; and intellectual property.
These comments will guide the Trump Administration in charting a new course for AI policy, likely distinct from the Biden Administration’s approach. While Biden leaned on executive orders and agency efforts – pushing ethical AI principles such as safety and civil rights in the absence of congressional action – his tenure also saw a patchwork of state-level rules that likely increased compliance costs for the industry. Recent moves suggest Trump will lean toward a more adaptable strategy, aligning with the goal of maintaining U.S. dominance in AI through free markets, increased research, and greater entrepreneurship. But as commenters have suggested, this might be challenging to achieve.
Moving forward, the Trump Administration will draw on these comments and themes to create an AI Action Plan that identifies key policy steps to strengthen and grow U.S. leadership in AI. This insight highlights the comments’ debates on core themes and examines some of the pros and cons of these proposals.
Export Controls and U.S. Leadership
Most commenters emphasized the need for targeted export controls and measures to ensure the global competitiveness of American AI. For example, the artificial intelligence company OpenAI proposed an “exporting democratic AI” framework that promotes AI adoption in allied countries while restricting access to adversarial nations, particularly China. This approach is similar to the current tiered system of AI diffusion, but while OpenAI suggests the United States should maintain the access framework among countries in the global AI market, it also asserts that some modifications – which would incentivize AI deployment in line with democratic values and ensure that U.S. AI infrastructure remains secure – are necessary. Other companies, such as Anthropic, advocate stricter export controls on advanced AI chips, particularly to address potential loopholes that could allow adversaries to acquire critical computational resources.
Google and the U.S. Chamber of Commerce, among others, pushed for export control policies that strike a fair balance to keep U.S. businesses competitive without tying their hands. Both suggested adequately resourcing and modernizing the Bureau of Industry and Security – which enforces export controls to protect U.S. national security, foreign policy, and economic interests – to oversee the AI supply chain and prevent smuggling or unauthorized tech transfers to rival nations. The Software & Information Industry Association (SIIA) also highlighted the need for diplomacy, arguing AI is a tool for U.S. influence and warning that heavy-handed controls might harm U.S. competitiveness more than adversaries. Ultimately, the AI Action Plan that develops from these comments should try to find the appropriate balance in promoting U.S. competitiveness and limiting malicious actors’ access to U.S. research and technology.
Other measures suggested included more rigorous monitoring and scenario planning to assess the long-term impact of export controls and proactively exporting models to allied nations rather than focusing solely on restrictions. Overall, almost all parties agreed that export controls must be precise, but the challenge will hinge on how to balance security with economic and diplomatic priorities.
Infrastructure Investment
An overlapping theme of the comments centers on the need for robust foundational systems to support AI development, such as computational power, energy, and data systems. Google and OpenAI, for example, highlighted the need for large-scale investments in AI infrastructure, including energy reforms. OpenAI framed infrastructure investment as a key geopolitical tool, with the United States aiming to outpace rivals such as China in a high-stakes AI contest. Google, for its part, made a stronger case for energy reforms, sounding the alarm that today’s grid could fall short of tomorrow’s demands. To achieve these goals, the company suggested pushing for smarter power systems, upgraded transmission, and a jolt of efficiency to keep the lights on for AI’s growth.
Similarly, some commenters focused on data infrastructure. Palantir emphasized the importance of infrastructure to ensure AI’s precision in operational applications, particularly for government agencies. It recommended investing in testing and evaluation capabilities to validate AI systems before deployment and monitoring performance in real-world scenarios. SIIA also broadly noted that infrastructure is essential for unlocking AI’s potential, suggesting the need to stretch data centers across the country and maintain an energy grid to meet the growing demands of the tech sector.
Many commenters, such as Business Roundtable, suggested ways to increase investment, including by expediting project approvals for data centers and related infrastructure, shortening environmental review timelines, and providing preliminary feedback on application accuracy to reduce delays.
Regulatory Frameworks
Organizations repeatedly called for regulations that balance innovation with efforts to address risks, with varying opinions about preemption and sector-specific needs. The U.S. Chamber of Commerce suggested clear rules to fuel AI investment, pressing the administration to provide clarity on existing regulations and harmonize legal frameworks so businesses don’t trip over conflicting policies. Business Roundtable similarly urged the federal government to preempt the patchwork of state laws by fostering a national AI framework. This would give companies a unified framework without resorting to heavy-handed compliance that could slow technological progress. R Street reinforced the need for a cohesive national AI policy framework and emphasized the importance of working with Congress to define regulatory authority over AI. The think tank likewise argued that state-by-state regulation could hinder innovation. It also suggested keeping mandates light and leaning on flexible, voluntary standards, with the National Institute of Standards and Technology (NIST) and other agencies leading the way.
Many industry-specific proposals were suggested, as well. The Bank Policy Institute (BPI) argued for banking-specific rules, spotlighting AI’s role in fraud detection, anti-money laundering, and risk management, and stressed the need for guidelines that reflect the financial sector’s specific challenges and opportunities. Meanwhile, the News Media Alliance recommended policies that shield journalism’s intellectual property from AI’s reach, arguing media needs tailored protections to keep creativity alive and competition fair.
Ethical AI Development, Safety, and Security
Encompassing all of the debates about AI development and competitiveness was a broader conversation about ethics, safety, and security. For example, the Center for Security and Emerging Technology argued that the United States should prioritize assessing risks, especially for AI that could infringe on privacy, safety, or legal rights. The Center for Democracy and Technology pushed for rigorous testing before and after systems go live, making sure everything works as promised and catching any biases that might affect vulnerable groups, though this would add costs to AI development. On privacy and security, it also highlighted that the use of AI in government could open data leaks or create bigger targets for hackers, and it therefore suggested tying AI into the privacy and cybersecurity frameworks the United States already has.
Similarly, Anthropic zeroed in on national security, pressing the federal government to set up testing systems to probe AI risks, including biological weapons and cyber threats. It wants the AI Safety Institute and NIST to work together to spot models that could be used for harmful purposes. The Center for a New American Security took a middle road, arguing the United States needs rules that prevent disasters but don’t kill innovation. It favored monitoring AI issues in real time and investing heavily in research to strengthen the technology. It also emphasized closer oversight of biotechnology given AI’s growing role in genetic design. The Center for AI Policy called for mandatory outside audits of cutting-edge AI to make sure it doesn’t fuel terrorism. It also encouraged the federal government to hold developers accountable when they exaggerate safety claims.
Intellectual Property
Finally, a core theme was the debate surrounding intellectual property in AI. On copyright, many firms expressed a desire for access to open data for AI training, arguing that copyright, privacy, and patent laws can get in the way of training models. For example, Google suggested that strong fair use and text and data mining exceptions are vital for AI to learn from public material without falling into messy negotiations every time a model needs to be trained. Similarly, OpenAI argued that U.S. copyright law, particularly the fair use doctrine, supports AI innovation. It therefore recommended that the federal government reinforce this advantage by shaping global copyright discussions, monitoring data access for U.S. firms, expanding government data availability, and defending pro-innovation principles domestically to secure economic and national security benefits. Creators such as the News Media Alliance, however, argued for free-market licensing to link content creators and AI developers fairly, to foster and maintain U.S. tech development and innovation. Current IP laws, it argued, are solid: Courts can handle AI disputes as they have with past tech shifts. But creators worry AI is using protected content without permission, risking damage before legal fixes kick in, and call for more collaboration to balance innovation and IP rights.
For patents, many creator groups argue that the administration should improve and maintain the U.S. Patent and Trademark Office and its Inter Partes Review (IPR) process – a legal procedure that challenges the validity of a patent – to allow efficient review of AI patents granted in error. With AI patents increasing worldwide and only limited time available for reviewing those patent applications, companies such as Google are pushing for a more reliable patent system to keep U.S. AI development evolving without delays created by bad patents. Specifically, some argue that the Trump Administration should strengthen the IPR program by ensuring its efficiency to reduce arbitrary rejections and allow businesses to quickly challenge flawed AI patents, especially those held by foreign players. This would ensure that U.S. companies aren’t stalled by legal battles or forced to waste R&D resources defending against bad-faith claims.
Conclusion
Moving forward, the Trump Administration says it will draw on these comments and themes to create an AI Action Plan that identifies key policy steps to strengthen and grow U.S. leadership in AI. While the administration has a wide range of comments to work through, the key themes outlined in this insight should be a starting point. If done well, the AI Action Plan should help to reinforce and expand the United States’ continued leadership in AI.