In a June 21 speech, Senate Majority Leader Chuck Schumer outlined his long-awaited “SAFE Innovation Framework” for artificial intelligence (AI) regulation. While stressing that “our framework must never lose sight of what must be our north star—innovation,” Schumer also sketched out a set of ideas for “regulating how AI is developed, audited, and deployed” to avoid what he called “a total free for all.” He said “an all-of-the-above approach” is needed to address a broad range of concerns, which suggests that a lot of AI regulation could be forthcoming.

Schumer’s address is a major moment in the growing battle over AI policy because it represents a push from the top ranks of Congress for a broad-based legislative framework for algorithmic systems and applications. His proposed policies signal that the United States may be abandoning the “permissionless innovation” policy vision that made America a global digital powerhouse.

He also said that the traditional legislative policymaking process is incapable of crafting law for fast-moving emerging tech like AI, meaning that “Congress will also need to invent a new process to develop the right policies to implement our framework.” Schumer aims to address this problem through “AI Insight Forums,” which will bring together “the top minds in artificial intelligence” to do “years of work in a matter of months,” and then advise Congress how to proceed.

With this speech, Schumer has signaled a potential sea change in the way the United States will regulate AI and perhaps many other emerging technologies going forward.

The Move Toward Permission Slip Regulation

The proposed “SAFE Innovation Framework” first addresses the sort of AI regulation Schumer thinks is needed. He played up the importance of AI and the need to “exercise humility as we proceed” in making policy for it, saying that AI represents the “next era of human advancement” and that “we must come up with a plan that encourages—not stifles—innovation in this new world.”

His professed humility and desire to advance innovation did not stop him from outlining a broad-based plan for regulating AI, however. The four letters in Schumer’s “SAFE Innovation Framework” stand for Security, Accountability, Foundations and Explainability. Each principle contains many additional objectives. Security, for example, includes workforce issues and national security, among others. Accountability includes a wide-ranging list of concerns such as kids and advertising, business fraud, racial bias, intellectual property and more. Foundations deals mostly with concerns about disinformation and election security, but Schumer said it also covers anything that might “undermine our democratic foundations.” The final pillar of Schumer’s regulatory framework—explainability—addresses the transparency of algorithmic systems. He wants a “solution that Congress can use to break open AI’s black box.”

This ambitious “all-of-the-above approach” will likely result in a lot more red tape, bureaucracy and permission slip-oriented regulation. Schumer asked, “if the government doesn’t step in, who will fill its place?” His question ignores two facts. First, AI governance is not happening in a vacuum. As R Street Institute research has documented, a robust set of governance frameworks has already been developed to cover AI systems and applications. Many organizations and experts have worked together across the globe to professionalize the process of AI “ethics by design” through sophisticated best-practice frameworks, standards and more.

Meanwhile, algorithmic and robotic systems are already, or soon will be, regulated by many government agencies and bodies of law. Across the federal government’s 434 departments and agencies, many have already started to consider how they might address AI and robotics. Activity has been percolating at the Federal Trade Commission, the Food and Drug Administration, the National Highway Traffic Safety Administration, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission and the Consumer Product Safety Commission, among others.

These agencies possess various regulatory powers that will cover algorithmic systems. These powers include consumer protection and anti-fraud policies, defective product recall authority and other sector-specific safety rules. The National Institute of Standards and Technology (NIST) has developed an important new “AI Risk Management Framework” meant to help developers better identify and address various types of potential algorithmic risk. This framework, which builds on a previous NIST cybersecurity framework, is a voluntary set of guidelines “designed to be responsive to new risks as they emerge” instead of attempting to itemize them all in advance.

This federal activity is supplemented by state and local governments, which have many overlapping consumer protection laws and other targeted statutes that govern algorithmic systems. Finally, the courts and our common law system will evolve to address novel AI problems through product liability, negligence, design defects law, breach of warranty, property and contract law, and other legal doctrines.

Schumer ignored all these governance mechanisms in his speech and erroneously suggested that “a total free for all” would exist without new top-down AI regulations or agencies being piled on top of this already comprehensive governance framework. While some additional policies may be needed to fill gaps in current law, we should first tap existing authority before adding still more bureaucracy and regulation to the mix. In some cases, certain algorithmic or autonomous systems may already be regulated too aggressively by some federal agencies, thus limiting important innovations.

Any AI policy needs to be risk-based, context-specific and focused on outcomes instead of system inputs or design. That standard is what makes Schumer’s call for algorithmic explainability particularly problematic: it sounds sensible in theory but is challenging in practice. Schumer admitted that explainability is “one of the thorniest and most technically complicated issues” because “[f]orcing companies to reveal their IP would be harmful, it would stifle innovation, and it would empower our adversaries to use them for ill.”

As a recent R Street filing argued, explainability actually raises even more issues than that because, “[i]f policy is based on making AI perfectly transparent or explainable before anything launches, then innovation will suffer because of bureaucratic delays and costly compliance burdens.” A certain degree of transparency is sensible, but it is simply not possible to make algorithms perfectly explainable. Efforts to mandate explainability will become highly cumbersome and controversial both technically and politically.

A New Approach to Tech Policymaking

The precise details of Schumer’s regulatory framework remain unclear because he stressed that a new process is needed to fill them in. He hopes his AI Insight Forums will help congressional lawmakers address tech policy issues more rapidly than they otherwise could. “If we take the typical path—holding congressional hearings with opening statements and each member asking questions five minutes at a time, on different issues—we simply won’t be able to come up with the right policies,” Schumer said. “By the time we act, AI will have evolved into something new. This will not do. A new approach is required.”

This is an astonishing statement for the leader of “the world’s greatest deliberative body” to make, but it accurately reflects a reality of modern legislative techno-politics: Congress has become incapable of finalizing major technology policy measures. Consider how, despite widespread bipartisan agreement and years of work, Congress has still not finalized a federal legislative framework for either data privacy or driverless cars. The failure of driverless car legislation is particularly instructive because autonomous vehicles represent a subset of AI policy, and it should have been easier for Congress to advance this more narrowly drawn, context-specific measure. Yet special interest opposition from truck drivers and trial lawyers has largely made that impossible. A broader bill attempting to cover all autonomous or algorithmic systems would likely face even more opposition.

Thus, while saying that “committees must continue to be the key drivers of Congress’ AI policy response,” Schumer called for “a new and unique approach to developing AI legislation” through his Insight Forums. Because AI is “deeper in its complexity” and also evolving faster than past technologies, Schumer believes that Congress must get “the best of the best sitting at the table… all together in one room, doing years of work in a matter of months.” The hope is that a diverse set of expert interests will be able to quickly forge the consensus that congressional lawmakers cannot seem to achieve on their own.

What Schumer is describing is a variant of what is often called multistakeholderism, a collaborative governance model that has been used widely within information technology sectors. Multistakeholder efforts have been a central feature of internet governance from the start, with a wide variety of institutions working together to create standards, norms and best practices for various digital systems and applications. While government bodies sometimes play a role in multistakeholder processes, their role has typically been limited to convening dialogues in the hope that the various parties hammer out agreements and standards in a collaborative, flexible and mostly voluntary fashion. This is also sometimes referred to as “soft law” governance.

In practice, however, the AI Insight Forums Schumer has proposed represent something quite different from that model. They are more akin to congressionally appointed expert advisory committees created with the express intent of formulating formal legislation and filling in the details of how Congress should regulate specific technological systems and applications. Perhaps some consensus will come out of this process, but these new Insight Forums are not going to make traditional policymaking problems go away. Many different special interest groups and regulatory advocates will be clamoring for a seat at the table. Meanwhile, many other AI bills have already been introduced this session, and more are likely coming as almost every congressional committee lines up to take a stab at AI policy.

Other political realities also stand in the way of large tech legislative measures. Even though tech policy feels increasingly important, it still does not get as much attention as other major policy issues—budget, taxes, environment, national security, etc.—which eat up far more legislative time. The legislative calendar is also abbreviated this year due to another upcoming election cycle. For these reasons, it is unlikely that a comprehensive AI act or new agency will be enacted in the near term.

Targeted Efforts Have a Greater Chance

Schumer’s everything-but-the-kitchen-sink approach to AI will appeal to many lawmakers and policy activists who want their concerns addressed in a holistic fashion. But it is unlikely to work because it introduces even more veto points into the lawmaking process: a larger number of lawmakers and special interests will either look to block progress on narrow matters or seek to advance extraneous priorities by attaching them to the effort.

If Congress hopes to get anything done at all on AI policy, lawmakers will have to be willing to break the issue down into much smaller components and focus on tractable objectives. It would be easier for lawmakers to address more targeted goals in stand-alone bills, such as proposals to keep AI away from nuclear weapons launch systems or other critical public infrastructure; disclosure for AI-generated political advertising; limits on so-called “predictive policing” algorithmic applications or the use of facial recognition tools by law enforcement bodies; or even measures to promote more robust supercomputing labs and systems and other research and development efforts.

Regardless, as the process unfolds, there are a few important realities and principles that policymakers must keep in mind. The governance vision America chooses for AI and robotics will have profound ramifications for our nation’s innovative capacity, global competitiveness and geopolitical standing. With China and other nations looking to greatly expand their own algorithmic and supercomputing capabilities, the United States must create a positive innovation culture if it hopes to prosper economically and foster a safer, more secure technological base that will ensure the nation is prepared for the computational revolution.

As Congress considers AI governance issues, lawmakers are right to expect that a culture of AI safety by design exists and that algorithmic developers are held to high standards. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society to advance life-enriching and even life-saving goods and services. Policymakers must never forget that the greatest of all AI risks would be shutting down AI advances altogether. Our nation can achieve AI safety without innovation-crushing top-down mandates, cumbersome licensing schemes and all-encompassing new bureaucracies. The goal of AI policy should be for policymakers and innovators to work together to find flexible, iterative, bottom-up governance solutions over time.
