Political interest in artificial intelligence (AI) has exploded over the past year, and policymakers across the globe continue to float a variety of ideas for AI system regulation. These proposals range from narrowly targeted measures to sweeping regulatory requirements, such as new AI-specific agencies and broad-based algorithmic licensing requirements. In fact, there are so many competing AI-related policy proposals and hearings at the federal level in the United States that it is difficult to catalog them all. An earlier R Street essay discussed the current prospects for AI bills and argued that the combined volume of these efforts—and the extreme regulatory proposals found in many of them—will likely make it difficult for Congress to advance broad-based legislation in the short term.

A new bill sponsored by Sens. John Thune (R-S.D.) and Amy Klobuchar (D-Minn.), the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA), seeks to break this logjam with a novel approach to AI governance that could serve as the basis for workable compromise. The measure already has bipartisan cosponsorship from several members of the Senate Committee on Commerce, Science, and Transportation, including Sens. Roger Wicker (R-Miss.), John Hickenlooper (D-Colo.), Shelley Moore Capito (R-W.Va.) and Ben Ray Luján (D-N.M.). This broad support among members of one of the most important committees for AI policy makes the proposal all the more significant.

What also sets AIRIA apart from most other legislative proposals is its risk-based approach to AI policy, which is rooted in existing governance structures and best practices. In essence, AIRIA melds self-regulatory “soft law” mechanisms (i.e., best practices and governance steps that are largely voluntary in nature) with some limited “hard law” regulatory enforcement to create AI safety standards.

A Response to Extreme Efforts from Congress and the White House

Many AI proposals start from the premise that regulation must be reinvented from scratch and imposed in a top-down, highly technocratic fashion. In Congress, much of the debate surrounding AI systems has focused on apocalyptic scenarios and worst-case thinking, leading to calls for sweeping regulation. The most extreme of these proposals comes from Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), who serve as chair and ranking member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. In September, they floated a wide-ranging regulatory framework that proposed a new AI-specific regulatory agency, required the licensing of high-powered AI systems, expanded AI developer liability, and established assorted transparency requirements.

Compared to the general tenor of current Capitol Hill AI discussions, the Thune-Klobuchar proposal represents a more reasonable starting point for federal AI policy. Far-reaching regulatory proposals such as those contained in the Blumenthal-Hawley bill are unlikely to generate widespread support, especially because many algorithmic innovators will oppose them.

The Thune-Klobuchar bill also takes on added importance in the wake of the Biden administration’s Oct. 30 release of a 111-page Executive Order (EO) on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Earlier R Street analysis explained how the Biden EO represents an everything-and-the-kitchen-sink approach to AI policy that opens the door to bureaucratic micromanagement of algorithmic systems. The EO empowers federal agencies to be far more aggressive in the oversight of AI markets in their respective fields. To the extent those agencies pursue more AI regulation unilaterally—and without explicit congressional authorization—it represents a potential sea change in the nation’s approach to digital technology markets.

A Hybrid Approach Rooted in Existing Best Practices

By contrast, the Thune-Klobuchar approach relies on a more incremental, bottom-up, risk-based approach to AI governance. AIRIA would specifically build on the multi-stakeholder approach to AI oversight developed by the National Institute of Standards and Technology (NIST). Working with a diverse array of groups and experts over the past few years, NIST created an AI Risk Management Framework (AI RMF) to formulate consensus-driven best practices for algorithmic development and solutions to various algorithmic concerns. The goal of the AI RMF is to create more trustworthy AI systems over time. NIST has developed similar frameworks for cybersecurity and other technical issues in which consensus-based standards are important to balance innovation and safety goals.

The AI RMF is not a regulatory program, and NIST does not possess regulatory powers to enforce best practices as binding requirements on algorithmic developers. Rather, the practices outlined in the framework are meant to guide developer decisions and policy considerations in a more flexible, iterative, consensus-driven and context-specific fashion. The AI RMF repeatedly stresses the importance of a so-called “test, evaluation, verification, and validation” (TEVV) process throughout the AI product lifecycle.

NIST created an accompanying “roadmap” to ensure the framework evolves to meet new developments and challenges over time. A crucial feature of this roadmap is that NIST will work with experts and stakeholders to constantly align algorithmic best practices with widely accepted international standards developed through ongoing negotiations with standards bodies.

These details are important because AIRIA builds upon the AI RMF as the foundation of AI policy. AIRIA instructs NIST to carry out research to facilitate the development of self-certification standards, risk assessments and testing processes specified by the U.S. Department of Commerce.

The bill distinguishes “critical-impact AI systems”—those that implicate critical infrastructure, criminal justice, national security or individuals’ biometric data—from “high-impact” AI systems, which are “developed with the intended purpose of making decisions that have a legal or similarly significant effect on the access of an individual to housing, employment, credit, education, healthcare, or insurance in a manner that poses a significant risk to rights afforded under the Constitution of the United States or safety.”

AIRIA would require critical-impact system creators to conduct a risk-management assessment, provide it to the Department of Commerce 30 days before release, and submit updated risk assessments going forward. The bill also lays out standards by which developers of such systems should use a TEVV process to self-certify adherence to various best practices. As noted, the TEVV approach has been pushed by NIST and endorsed by the U.S. Department of Defense for defense-related AI development but would be applied more broadly under AIRIA.

High-impact systems are subject to somewhat lighter-touch oversight. AIRIA instructs the Department of Commerce to work with agencies to “develop sector-specific recommendations for individual Federal agencies to conduct oversight” of high-impact artificial intelligence systems “to improve the safe and responsible use of such systems.” High-impact system developers must submit transparency reports to the Department of Commerce “describing the design and safety plans for the artificial intelligence system” before product launch and every year thereafter. Providers are required to specify how such AI systems will be used, the data that power them, the potential impacts, and the metrics used to gauge those impacts. The bill recommends that providers follow best practices outlined in the NIST AI RMF when doing so. If developers do not comply with this framework, AIRIA would authorize the Department of Commerce to impose fines on the provider or even prohibit them from deploying their systems.

Through these provisions, AIRIA seeks to combine evolving best practices for AI with some enforcement oversight. It is not a strict AI licensing regime that demands a new agency or formal licensing, however. Instead, it builds on existing multi-stakeholder processes and best practices and authorizes limited federal oversight of self-certification for developers of some systems. The bill makes a good-faith effort to define those systems, but there will certainly be ongoing controversy about who or what qualifies as a critical-impact system versus a high-impact system.

Additionally, the bill requires internet platforms to clearly indicate whether they use generative AI to create content for users. It also mandates a review of how the federal government currently uses algorithmic systems and what barriers exist to more widespread adoption and requires the Department of Commerce to create a working group dedicated to developing responsible education efforts for AI systems. Less controversial than the bill’s other stipulations, these provisions are likely to win broad support.

AIRIA Still Creates Some Regulatory Burdens

While more flexible than other pending legislative proposals or Biden administration efforts, AIRIA still represents an expansion of federal AI regulation. An analyst with the Center for Data Innovation at the Information Technology and Innovation Foundation argues that AIRIA “is jumping the gun” because “[r]ushing to establish AI standards without a clear understanding of the nuanced requirements in different sectors risks creating a framework that does not effectively address diverse contexts.”

She specifically worries that AIRIA’s approach to critical-impact AI systems “takes a solution for AI accountability used in defense contexts and tries to shoehorn it into nondefense context” and will result in an “incredibly broad” scope of regulatory coverage, which could be unworkable and overly burdensome for algorithmic innovation in important fields. More generally, AIRIA would create new paperwork burdens for AI developers as well as potential compliance headaches that could slow the pace of algorithmic innovation.

These concerns could be addressed by narrowing the scope of what constitutes a critical-impact or high-impact AI system under AIRIA or by encouraging the Department of Commerce to use a more flexible soft-law approach to addressing these issues through continued refinement of voluntary best practices. Better yet, lawmakers should appreciate the benefits of continuing the same sectoral approach that has long guided tech policy, which relies on the many existing agency and court-based remedies that can address potential AI harms as they develop in various contexts. Before AI came into focus, computers and consumer electronics were the major general-purpose technologies that revolutionized almost every sector of the economy. Policymakers did not respond by creating a Federal Computer Commission; rather, they relied on the extensive set of agencies, laws and common law standards already on the books and adapted them to fit new challenges.

There are other unresolved questions about AIRIA. For example, it is unclear how the bill would address open-source AI systems as they evolve over time. These systems are highly decentralized and change rapidly, making it difficult to know who would be responsible for complying with AIRIA’s various certification or transparency requirements. Another question left unanswered is how the bill would address the rising tide of state and local AI regulation. Some industry groups are already advocating for federal preemption of the growing patchwork of AI regulations, but AIRIA does not address preemption.

The more general concern about the bill lies in how it might be interpreted later by an agency with limited experience in regulating fast-moving tech markets. If AIRIA’s provisions were read too broadly or enforced too aggressively, the law could open the door to the Department of Commerce becoming a more aggressive de facto licensing regulator for algorithmic systems, piling on ever more certification requirements or threatening fines under ambiguous provisions. Congress should clarify that this is not the intent of the law if the bill moves forward. The focus should remain on optimizing widely accepted best practices while maintaining a flexible approach to their creation, application and oversight.

As an earlier R Street report argued, the most important AI governance principle should be that regulation focus on algorithmic outcomes rather than inputs or the initial processes used to create new systems. “What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner,” the report noted. Thus, the ideal AI governance regime zeroes in on outcomes and performance—and relies on actual evidence of harm—to ensure that algorithmic innovation is not overly burdened by prescriptive micromanagement or ambiguous accusations of theoretical harm before new services have had a chance to be used.

Conclusion

Despite these concerns, AIRIA represents an important counterbalance to recent proposals from the Biden administration and others in Congress, which tend to favor more onerous regulatory policies for AI that could hobble U.S. innovation and global competitiveness in this vitally important technology sector. Extreme approaches to algorithmic regulation are also unlikely to win enough support to advance in Congress. A more moderate approach to AI governance is needed for a broad-based measure to move forward, and AIRIA provides the foundation for a more pragmatic AI governance model with a better chance of advancing than other comprehensive but overly regulatory proposals.

If the measure advances, however, it is important that it remain rooted in a risk-based, multi-stakeholder approach to tech governance that focuses on flexible best practices and does not devolve into a more cumbersome regulatory regime for AI technologies.