I recently published a piece discussing three artificial intelligence (AI) hearings on Capitol Hill where I testified and learned some important lessons about the state of AI policy in Congress today. Building on those experiences, I elaborate here on how the AI policy debate has evolved over the past two years and where things might be heading as federal lawmakers contemplate AI legislation.

The AI Vibe Shift

These hearings highlighted both the new priorities and the continuing tensions surrounding AI policy in Congress today.

The mood and priorities in Congress were different just two years ago. The AI policy debate was far more fear-based and included sweeping calls for broad-based regulation. Consider the contrast between two Senate hearings held almost exactly two years apart.

On May 16, 2023, the U.S. Senate Judiciary Committee held a hearing titled “Oversight of A.I.: Rules for Artificial Intelligence,” where senators and witnesses outlined various proposals to regulate AI aggressively through new federal regulatory bureaucracies and preemptive licensing and auditing schemes. There was also considerable openness to the idea of aligning U.S. law with more heavy-handed rules from the European Union (EU) or other global AI regulatory regimes, perhaps through a new international regulatory body.

Flash forward to a Senate Commerce Committee hearing held May 8, 2025, “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation.” The title alone reflects the new mood in Congress about AI policy. The hearing announcement from Committee Chair Ted Cruz (R-Texas) said America needed to “find ways to remove restraints on the AI supply chain and unleash American dominance in machine learning and next-generation computing.” At the hearing, Sen. Cruz and many other lawmakers made it clear that beating China in the race for global AI supremacy was a top priority. He and other senators also highlighted the burdens associated with European tech regulations as well as the growing patchwork of state regulations. This was starkly different from the rhetoric and proposals of the 2023 Senate Judiciary Committee hearing.

Representatives of leading technology companies, including OpenAI CEO Sam Altman, have also changed their approach. During the 2023 hearing, The New York Times reported, Altman “implored lawmakers to regulate artificial intelligence” and “had a friendly audience in the members of the subcommittee.” At the 2025 hearing, by contrast, Tech Policy Press noted that Altman displayed “a notable shift in posture.” For example, Altman said proposals to require prior regulatory approval for AI innovations would be “disastrous” and railed against a state-by-state AI regulatory patchwork, saying, “I think it would be quite bad. I think it is very difficult to imagine us figuring out how to comply with 50 different sets of regulation.”

New White House Approach Changed Things

Part of the reason for the shift by Altman and other tech leaders likely lies in the broader shift in the White House’s tone and policy. Many of the Biden administration’s AI policy documents spoke in ominous tones, claiming algorithmic systems are “unsafe, ineffective, or biased” and “threaten the rights of the American public.” Former President Joe Biden also signed a historically long AI executive order (EO).

When President Donald J. Trump took back the White House, he immediately repealed Biden’s EO and signed a new order adopting a more optimistic approach: “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Vice President JD Vance delivered a major speech on AI policy in February, arguing that “excessive regulation of the AI sector could kill a transformative industry just as it’s taking off, and we’ll make every effort to encourage pro-growth AI policies.”

Meanwhile, the Trump administration has taken other actions to reverse Biden’s approach to AI, including converting the U.S. AI Safety Institute (AISI) into the Center for AI Standards and Innovation (CAISI), which will focus more on promoting AI innovation. These recent actions have likely encouraged developers to change their approach to AI policy.

Congress’s Role in the AI Policy Debate

There are limits, however, to how much the Trump administration can accomplish through EOs and other unilateral actions. While Congress will eventually need to address some major AI policy issues, the landscape remains highly complex and fluid, making it difficult to forecast even short-term congressional actions.

Congress has been in this position before. Efforts to formulate broad-based AI legislation are haunted by the ghosts of past congressional failures to enact comprehensive privacy legislation. Despite bipartisan support for some sort of federal action on privacy, lawmakers disagreed on the scope of state preemption, liability issues, and other matters. These same issues will make bipartisan action on AI legislation challenging.

Last December’s Bipartisan House Task Force Report on Artificial Intelligence searched for common ground on AI policy; however, it often spoke in broad generalities and left important details for later. Unfortunately, digital tech policy is more partisan now than during the mid-1990s, when Congress and the Clinton administration worked together to enact the Telecommunications Act of 1996 and a light-touch policy framework for the internet and online commerce. Finding legislative consensus will be harder today—not only because of partisanship in tech policy, but also because AI affects virtually every facet of the economy and society.

Regardless, comprehensive AI legislation will have to resolve at least three major issues.

Preemption

Over 1,000 AI-related bills were introduced in the first four months of 2025, most of them state proposals. This has raised concerns among many federal lawmakers that a patchwork of confusing and costly state and local AI policies could undermine national AI priorities by discouraging nationwide business formation, investment, and consumer choice. It could also undermine America’s global competitiveness more broadly, making it harder for U.S. firms to keep pace with China.

This is why Republicans proposed a 10-year moratorium on state AI regulations in the current budget bill. While that moratorium has proven highly contentious, Congress must, one way or another, provide for some degree of preemption of state AI laws.

At a minimum, state governments should not regulate large frontier AI models in an extraterritorial fashion, because such regulation clearly affects interstate commerce. New York lawmakers recently passed the Responsible AI Safety and Education (RAISE) Act, which would impose a variety of regulatory obligations on AI developers and threaten them with state liability for failure to comply. The bill is currently awaiting a signature or veto from Gov. Kathy Hochul. A similar bill passed in California last year before being vetoed by Gov. Gavin Newsom.

Meanwhile, state “algorithmic discrimination” bills are advancing, and one has already been enacted in Colorado. These measures will have a similar effect on the interstate AI marketplace, again necessitating some degree of federal preemption or a moratorium on such enactments.

CAISI Authority

Comprehensive AI legislation would allow congressional lawmakers to clarify the formal powers of the new CAISI. In the previous session of Congress, a heated debate took place over the question of what (if any) regulatory powers the AISI should have. Now that the Trump administration has converted the AISI into the CAISI and refocused it on promoting rather than restricting AI, it might be easier for Congress to endorse the agency as part of a new AI law.

Congress has not given many regulatory powers to Department of Commerce agencies like the National Telecommunications and Information Administration (NTIA) or the National Institute of Standards and Technology (NIST), which houses the new CAISI. Federal lawmakers must decide the extent of CAISI’s authority and whether NIST or the NTIA should be granted any formal regulatory power of their own.

Congress should also clarify the role of the chief AI officers that the Biden administration created at each federal agency and the Trump administration has retained. Finally, Congress could address other existing agency rules affecting AI development, instructing agencies to examine how to reform them while ensuring adequate resources to address legitimate risks in specific sectors.

Liability and Whistleblower Protection

While Congress has a variety of AI governance options to choose from, almost all of them must eventually address what sort of liability AI developers and deployers might incur for misuse of their systems. Congress could opt to keep AI policy focused on transparency and lean on large model developers in particular to work with the CAISI to evaluate system capabilities and vulnerabilities. Several leading bills from the previous session would have encouraged some degree of transparency around, or adherence to, NIST-developed best practices for AI risk management. Some states have already proposed such rules, but if rules of this kind are imposed at all, it should be at the federal level.

Lawmakers could also focus on AI transparency requirements or best practices for specific sectors or professions. Sen. Cynthia Lummis (R-Wyo.) recently introduced the “Responsible Innovation and Safe Expertise (RISE) Act of 2025,” which would offer AI developers a safe harbor from civil liability when their systems are used by certain “learned professionals,” such as physicians, attorneys, engineers, and financial advisors, provided the developers release key design specifications for those systems. This sort of quid pro quo would encourage greater developer transparency in exchange for conditional immunity from civil liability. It would also preempt state laws that sought to impose a patchwork of liability regimes on such developers.

Whistleblower protections could also become part of a federal AI law. A new “AI Whistleblower Protection Act,” which would provide whistleblower protections to current and former employees of AI companies who disclose information about system dangers or failures, has garnered bipartisan support. Such a law would need to balance the protection of trade secrets against the need for system safety.

Conclusion

While many lawmakers desire broad-based action on AI policy, narrowly focused measures have the best chance of advancing in the short term. This is another bitter lesson of Congress’s recent experience with privacy legislation: Attempting to solve too many problems at once in a single large bill is a recipe for failure.

There have been calls to address concerns about AI’s implications for copyright, child safety, elections, research and development, and many other matters. Whatever their individual merits, if all of these proposals were folded into one bloated AI bill, the resulting measure would likely collapse under its own weight.

Moreover, trying to anticipate and preemptively address every perceived AI-related problem is unwise. AI innovators should not be treated as guilty until proven innocent of hypothetical future harms. That is the EU model of regulation, and it has undermined innovation and investment across the Atlantic.

To succeed in the current moment, AI policy must be incremental and fit for purpose. Congress can use an approach built on best practices, transparency, and targeted liability to strike a sensible balance between innovation and safety.
