Insight
May 21, 2025
Regulatory Approaches to AI: Balancing Innovation and Oversight
Executive Summary
- As part of the reconciliation bill, the House Energy and Commerce Committee advanced a controversial 10-year moratorium on state-level artificial intelligence (AI) regulation, aiming to prevent a patchwork of laws across 50 states that could complicate compliance and slow AI innovation.
- If the moratorium becomes law, it would put pressure on Congress to craft a national AI strategy that addresses risks without stifling innovation; Congress would have three main regulatory approaches to consider: light-touch models that prioritize flexibility and speed, national standards that aim for uniformity, and risk-based frameworks that scale oversight to potential harm.
- This insight walks through the current state of AI regulation, the pros and cons of key AI regulatory frameworks, and what policymakers should consider if they decide to craft a national strategy governing AI use.
Introduction
As part of the reconciliation bill, the House Energy and Commerce Committee advanced a 10-year moratorium that would block states from creating or enforcing any laws regulating AI systems, models, or automated decision-making tools. The initiative aims to prevent a patchwork of laws across 50 states that could complicate compliance and slow national innovation. Although the moratorium advanced in the House, its path in the Senate is uncertain. And while no one wants a fragmented regulatory patchwork, achieving a well-balanced national AI framework won’t be easy. If passed, the moratorium would put pressure on Congress to craft a national strategy for AI. The recent Senate hearing, “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” highlighted key challenges, including industry calls for clearer rules, worries about overregulation, and concerns about AI misuse. Yet there is no consensus on what federal policy should look like.
Overall, three main paths are under consideration, each with trade-offs: a light-touch approach that avoids early intervention and supports innovation, a standards-based model that ensures consistency and codifies best practices, and risk-based regulation that targets oversight where it is most needed. This insight walks through the current state of AI regulation, the pros and cons of key AI regulatory frameworks, and what policymakers should consider if they decide to craft a national strategy governing AI use.
Current State of AI Legislation
The United States is navigating a rapidly evolving AI landscape without a clear framework in place. In this context, states have begun to take the lead, introducing more than 500 AI-related bills in the past two years alone. These state-level efforts cover varied concerns, including government use of AI, deceptive content in elections, and education initiatives that seek to leverage AI for both public- and private-sector advancement. This growing wave of state-level activity reflects rising concerns that were echoed in the recent Senate hearing on AI competition, where witnesses – including leaders from OpenAI, AMD, CoreWeave, and Microsoft – warned that 50 different sets of AI rules could hinder progress and undermine U.S. leadership.
In response to this growing regulatory patchwork, the House Energy and Commerce Committee, as part of the reconciliation bill, proposed a 10-year moratorium on state-level AI regulation that aims to centralize AI governance by preventing states from enacting or enforcing their own AI-related laws. Supporters suggest this would give Congress and federal agencies the room needed to develop a cohesive national strategy. Critics, however, see it as a move that could block states from addressing their urgent needs and give too much power to the AI industry. While several Senate Republicans have voiced support, others are more skeptical. Yet, whether through standalone legislation or as part of a broader federal AI package, the idea of preempting state laws is now firmly in the spotlight.
If a moratorium is enacted and states are barred from passing AI-specific bills, Congress will likely face additional pressure to step in and regulate the field. Indeed, in the Senate hearing, there was some discussion of what a national AI framework might look like, with particular emphasis on light-touch principles, national standards, and risk-based regulation.
Approaches to AI Regulation
Light Touch
In the context of AI, a light-touch regulatory approach means setting minimal government restrictions to allow innovation to thrive, especially during the initial stages of development. The idea is to avoid overregulating a fast-moving technology in which the United States is seeking to lead. Proponents of a light-touch approach argue that it would encourage experimentation, investment, and rapid deployment, while giving federal agencies time to better understand emerging risks. Several senators are backing this approach. Senator Ted Cruz (R-TX), for example, is currently developing a light-touch legislative framework and has floated a federal “regulatory sandbox” to allow AI companies to innovate without fear of early compliance burdens. The concern, however, is that without clear guardrails for specific harms, AI could be misused, leading to discrimination, deepfakes, or privacy violations before safeguards are in place.
National Standards
National standards would establish a unified set of federal guidelines that provide consistent rules and codify best practices for how AI systems are developed, deployed, and governed across the United States. Proponents of this approach argue that national standards can provide clarity for developers, protect consumers, and help avoid inconsistent rules across states. Yet critics caution that rigid federal standards might stifle innovation and fail to address specific local concerns.
Several senators have introduced proposals to establish national AI standards. For instance, in the last Congress, Senators John Thune (R-SD) and Amy Klobuchar (D-MN) co-sponsored bipartisan legislation that would require the National Institute of Standards and Technology and the Department of Commerce to develop guidelines and a certification process for critical-impact AI systems. But while AI standards, such as technical guidelines and best practices, could be a positive step toward mitigating harms associated with bias and discrimination, deepfakes and misinformation, and security and privacy risks, they risk becoming rigid rulebooks that stifle innovation if they are not designed to evolve alongside a fast-changing technology such as AI.
Risk-based Frameworks
Rather than establishing a national standard, Congress could explore a risk-based strategy that addresses high-risk use cases while still allowing the development of AI more generally. A risk-based approach to AI regulation would sort AI systems by how much danger they could pose to people and society. High-risk systems, such as those used in hiring, lending, or criminal justice, can have a big impact on people’s lives, deciding who gets a job or a loan, or how long someone stays in jail. If these systems are biased or flawed, they could lead to unlawful differential treatment, and the damage could be serious and hard to undo. This approach would ensure that the regulatory response is proportional to the potential harm, allowing for stricter oversight where necessary while avoiding overregulation in lower-risk areas.
During the Senate hearing, experts discussed the importance of distinguishing between high-risk and low-risk AI applications. They emphasized that while high-risk systems require robust safeguards to prevent harm, low-risk applications might benefit from a more flexible regulatory approach. This perspective aligns with the principles of risk-based regulation, which calls for tailored oversight that addresses the specific risks associated with different AI systems.
State Role Moving Forward
Finally, even with a moratorium on state AI regulation, state governments would still have an important role to play. Preemption can help create national consistency, but it would not erase the legitimate responsibilities that state and local governments have always held. For example, states would still decide how to implement and govern AI within their own operations. Moreover, states already have a range of laws they can enforce when AI is involved, especially around issues such as fraud, civil rights, and consumer protection. A moratorium would not block states from using those tools. The focus, instead, could shift toward applying current legal frameworks effectively rather than creating new and potentially conflicting regulations.
Conclusion
What most AI regulatory efforts get wrong is the tendency to overreach: trying to cover everything at once with broad, premature, and unclear language and layers of compliance requirements. The path forward should prioritize clarity, flexibility, and a focus on real risks. To address the challenges of overreach, a temporary moratorium could give Congress, agencies, and developers the room needed to evaluate what truly requires regulation, what can be addressed through existing laws, and where gaps remain. During this period, the United States could consider adopting a light-touch, risk-based framework that would apply stronger safeguards where the potential for harm is high while allowing low-risk applications to develop, ensuring both innovation and safety.