The R Street Institute’s Adam Thierer testified last week before a panel of the House Committee on Oversight and Accountability at a hearing titled, “White House Overreach on AI.” The hearing was part of the Committee’s ongoing examination of artificial intelligence (AI) and AI regulation, focused on innovation, misinformation, and government overreach.

Last October, President Joe Biden issued a sweeping executive order (EO) that purported to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, and promote innovation and competition. In an October analysis of that EO, Thierer explained that “unilateral and heavy-handed administrative meddling in AI markets could undermine America’s global competitiveness—and even the nation’s geopolitical security—if taken too far.” This focus on innovation became one of the main subjects of the hearing.

In his testimony, Thierer outlined a vision for congressional legislation that would spur development in the AI industry, identifying four key pillars around which a framework ought to be built: flexibility to account for rapidly changing technologies; reliance on existing government powers rather than the creation of new ones; preemption of state and local AI laws; and a green light for entrepreneurs to develop new AI services under the principle of “innocent until proven guilty.”

Many of the members present expressed concerns about how the EO abuses the Defense Production Act. As Thierer explained, “the order flips the Defense Production Act on its head and converts a 1950s law meant to encourage production into an expansive regulatory edict intended to curtail some forms of algorithmic innovation.” As the hearing progressed, two additional concerns emerged from members of the panel: the EO’s requirement that firms share the safety testing results of their AI models, along with other sensitive internal company data, with government agencies; and the idea that the president usurped Congress’s authority by implementing sweeping policy changes without express authorization from the legislative branch.

With regard to data safety and security, Subcommittee Chairwoman Nancy Mace (R-S.C.) asked whether the Department of Commerce can be trusted to keep sensitive data out of the hands of foreign adversaries, given its failure to prevent hackers from accessing Secretary Gina Raimondo’s email. The majority’s witnesses agreed that it cannot. Mace then asked whether the government should be trusted as the sole arbiter of what constitutes disinformation and whether it ought to require that testing data be submitted to relevant agencies; Thierer and the other majority witnesses responded with similar skepticism.

On the topic of White House overreach, Rep. Eric Burlison (R-Mo.) asked Thierer whether the “executive order has gone too far.” Thierer replied that “it very well could” and noted:

…just this week Saudi Arabia announced historic investment in its AI capacity, something like $40 billion. Last September, the government of the [United Arab Emirates] came out with an open-source AI model that is 2.5 times larger than America’s largest open-source AI model. So, it’s not just China we face off against, it’s all sorts of countries … If this executive order shoots ourselves in the foot as a nation and holds back our innovative capacity, that has massive ramifications for our competitiveness and our geopolitical security.

Rep. Burlison echoed Thierer’s concern, likening it to the health care information technology space, where he personally witnessed overregulation and regulatory capture stifle innovation. Toward the end of the hearing, Rep. Stephen Lynch (D-Mass.) asked Thierer about potential solutions for fighting AI-generated misinformation. Thierer pointed to proposals to increase AI literacy and education, which Rep. Lynch acknowledged as an important step before pressing the other witnesses for technological solutions to the problem.

In sum, most of the panel’s members and Thierer’s co-witnesses agreed on the importance of pursuing federal policies that promote, rather than hinder, American innovation and geopolitical security; that prevent federal overreach; and that avoid the perils of sharing sensitive data with government agencies in the manner outlined in the EO. AI—and specifically generative AI—represents a burgeoning field of powerful technology. As firms explore its development and applications, it is vitally important that the federal government allow proper space for innovation. As Thierer noted in his opening remarks, “Had a Chinese operator launched a major generative AI model first, it would have been a ‘Sputnik moment’ for America.” Luckily, an American firm beat China to the punch.