Insight
March 5, 2025
Sustaining U.S. AI Leadership: Moving Beyond Restrictions
Executive Summary:
- The release of the Chinese open large language model (LLM) DeepSeek-R1 demonstrates that foreign countries can adapt to create high-performance artificial intelligence (AI) models by optimizing available chips and leveraging open-source AI.
- This development has revealed that American efforts to sustain leadership in the field of AI – including the use of export controls on AI chips – have fallen short of their objectives, as DeepSeek was trained using open-source models and not export-controlled AI chips, thus circumventing U.S. restrictions.
- Policymakers should reassess whether restrictive measures like export controls effectively sustain U.S. AI leadership or if a more holistic strategy – one that strengthens global competitiveness while supporting domestic AI through research, compute power, talent, and data – is needed.
Introduction
The release of the Chinese open large language model (LLM) DeepSeek-R1 in January has redefined the artificial intelligence (AI) race. Until now, AI progress has been driven by ever-larger models and massive compute power supplied by elite chips, with American companies such as OpenAI, DeepMind, and Anthropic investing billions in Nvidia's world-leading chips and supercomputers. What took the world by surprise was that a Chinese startup founded in 2023 produced an AI model, DeepSeek-R1, that rivals other advanced models, including ChatGPT, by leveraging open-source AI and optimizing available hardware.
This development has also shaken the foundations of U.S. leadership in the creation of AI systems. To maintain the American advantage in AI, the U.S. government has since 2018 used export controls to restrict access, chiefly China's, to the AI chips with sufficient compute power to train advanced AI systems. More recently, the Biden Administration introduced the Framework for Artificial Intelligence Diffusion, which expands chip restrictions to additional countries and covers closed AI model weights, the parameters that define how an AI model learns. Using a three-tiered system, the framework places constraints on more than 150 countries across two tiers, while allowing only a single tier of 18 allied countries easy access to these technologies.
In considering how to maintain leadership in the field of AI, the United States has wielded the tool of AI chip export controls. Yet while these controls are premised on the idea that restricting superior chips can hinder foreign AI advancement, DeepSeek's release shows that AI progress does not depend solely on access to the most powerful chips: the LLM was produced using open-source models, without access to the technologies whose export the United States has attempted to restrict. In other words, modern AI development rests not only on compute power but also on research, talent, and data. While export restrictions may provide short-term advantages by limiting access to one of these factors (compute power), a holistic approach should consider the critical role of all of them in accelerating AI progress for U.S. companies at home and in maintaining their position in the global market.
Policymakers may then be tempted to restrict access to open-source models as well, closing the loophole that allowed DeepSeek to circumvent U.S. export controls on advanced chips. This would be a mistake. While open-source AI carries risks, the technology also offers benefits, lowering barriers to use, enabling modification, supporting cost-saving optimization, and fostering the collaboration that is key to AI development. Instead, policymakers should reassess whether the established export controls are effective or whether a holistic strategy is necessary, one that recognizes the critical and interrelated nature of all the factors needed to foster AI development that keeps pace with evolving technological advancements.
Export Controls at Crossroads
To maintain the U.S. advantage in AI, the federal government has used export controls to restrict other countries' access to AI chips, particularly China's, on the assumption that this approach helps preserve the technological edge the United States holds over other countries.
Increased global tension over the possession and distribution of chips began around 2018, when the Trump administration cut off Chinese chipmaker Fujian Jinhua Integrated Circuit from its U.S. suppliers. Since then, the United States has implemented a series of chip export restrictions, including, in 2022, limits on China's access to Nvidia's A100 and H100 chips, high-performance GPUs optimized for AI and other high-performance computing (HPC) applications. More recently, the Biden Administration introduced the Framework for Artificial Intelligence Diffusion, which outlines the U.S. position on the global diffusion of AI by levying AI export controls on more countries and introducing controls on closed AI model weights, the parameters that define how an AI model learns. The framework establishes three tiers of countries with different levels of access: Tier 1 countries (the United States and 18 key partners) face no restrictions; Tier 2 countries (roughly 150 other nations) can receive exports only through companies that have joined the data center authorization program or obtained individual licenses; and Tier 3 countries (arms-embargoed countries) remain under strict export restrictions.
While American chip firms remain by far the dominant players in the export of AI-capable hardware, the Trump Administration should consider that imposing widespread restrictions on the sale of chips and AI models could stifle innovation and limit opportunities for U.S. firms in global markets. First, U.S. export controls severely limit AI companies' ability to operate in Tier 2 countries, which encompass around 150 nations and include allies such as Portugal, restricting both AI models and the necessary hardware. This added bureaucracy increases costs and delays exports, discouraging global customers who may turn to alternative suppliers with fewer barriers, such as China. With a reduced global market, lower revenue would harm R&D and slow innovation, while regulatory uncertainty would further deter investment, pushing capital toward more stable markets. Nvidia has voiced concerns about the rule's overreach, arguing that it would impose bureaucratic control over the design and global sale of America's leading technologies.
Second, most Tier 2 countries will make important decisions about AI and infrastructure in the coming years. By constraining American firms' ability to offer these countries accessible and customizable models, federal restrictions risk weakening U.S. alliances and influence in the developing world, giving China an opportunity to dominate these markets and shape their technological future.
Finally, by relying on fixed, rigid tiers to determine countries' chip access, the United States often fails to adapt quickly enough to the fast pace of AI advancement. As a result, overly stringent controls, particularly those on Tier 2 countries, could harm U.S. companies by restricting their global market access and discouraging innovation.
Open-source Models: Liability or Opportunity?
DeepSeek’s release has placed significant attention on the use of open-source models – AI systems with their components available for further study, use, modification, and sharing. Proponents of open-source models argue that they promote AI development, reduce market concentration, and facilitate collaboration, while opponents argue they pose safety risks and put some AI companies at a competitive disadvantage. As a result, regulators are contemplating policies that could hamper their growth.
DeepSeek was launched as an open-source model and is one of the most unrestricted large-scale AI models to date. Additionally, it leveraged two open-source models, Alibaba’s Qwen and Meta’s Llama, to enhance its R1 model and enable its reasoning capabilities. There are some key lessons from this development.
First, DeepSeek was built with the help of open models through a technique known as model distillation, which transfers knowledge from a large "teacher" model to a smaller "student" model. Distillation reduces computational costs and enhances the efficiency of AI systems, making them suitable for resource-constrained environments such as those with limited computing power. Open-source models facilitate distillation by making their architecture and training data accessible. This openness allows developers, researchers, and companies to freely access and modify the models, creating smaller, more efficient versions for diverse applications.
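The teacher-student transfer described above can be made concrete with a minimal sketch. In the standard formulation of distillation, the student is trained to match the teacher's "softened" output distribution (its logits passed through a temperature-scaled softmax), typically by minimizing the KL divergence between the two distributions. The function names and the toy logits below are illustrative, not drawn from DeepSeek's actual training pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; a temperature > 1
    'softens' the distribution, exposing the teacher's relative preferences."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's:
    the quantity a student model minimizes during distillation training."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits track the teacher's incurs a lower loss than one
# that disagrees; gradient updates push the student toward the teacher.
teacher = [4.0, 1.0, 0.2]
close_student = [3.8, 1.1, 0.3]
far_student = [0.2, 1.0, 4.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice this loss is computed over a model's full vocabulary at every token position and combined with a standard training loss, but the core idea is the same: the student learns from the teacher's probabilities rather than from raw data alone, which is why access to a capable open model lowers the compute needed to build a smaller one.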
Second, DeepSeek, an open Chinese AI model, follows the same unrestricted approach as many other open-source AI projects worldwide. Anyone – from independent researchers to private companies – can refine, customize, and deploy the model without seeking permission or entering into licensing agreements. By eliminating these barriers, DeepSeek provides U.S. and international startups, researchers, and developers with free access to leverage this openly available model, fostering global AI research and innovation.
Sustained U.S. leadership in AI depends in part on how well the federal government learns these lessons. Some may argue that more restrictions should be imposed on open models to hinder their use by U.S. adversaries and reduce risks, but hampering open-source AI development could also cause the United States to miss out on the benefits of open-source models: lower barriers to use, easier modification, and cost-saving optimization. While some risks exist, efforts to limit open-source AI may ultimately cause the very harm critics seek to avoid.
Looking Forward
To hinder AI development in other countries, the U.S. has considered two main tools: expanding export controls and restricting open-source models, both with significant implications for AI development not only internationally but also domestically.
DeepSeek's success reveals that an overly restrictive, inflexible global approach that limits the distribution of AI chips and models may provide short-term advantages, but its long-term effectiveness in sustaining U.S. AI leadership is not guaranteed, and such a policy could even hamper U.S. AI development at home as well as the country's market dominance abroad.
Thus, winning the competition for the global AI market and infrastructure while sustaining domestic AI will require a holistic strategy that considers the pivotal and interrelated roles of the main factors behind modern AI development: research, compute power, talent, and data. First, robust AI research depends on cultivating top talent across disciplines such as computer science and machine learning, making it essential to foster an environment that attracts and develops expertise. Second, AI development depends on access to high-performance computing and reliable data for training large-scale models; sustained investment in cutting-edge infrastructure, compute power, and access to large data sets will therefore be essential in the coming years. Finally, AI research requires strong collaboration among academia, industry, and global partners. Universities and startups should work alongside industry leaders to bridge the gap between fundamental research and real-world applications, leveraging shared expertise and resources; open source facilitates this collaboration by reducing barriers to entry. While export restrictions may provide short-term advantages by limiting access to one of these inputs (compute power), an effective strategy must ultimately incorporate lessons from recent years while carefully considering the interconnection of all the critical factors that drive AI development.