The Most Important Principle for AI Regulation
What is the best way to advance the artificial intelligence (AI) revolution while also addressing the risks surrounding algorithmic technologies? Writing in The Ripon Forum this month, Rep. Jay Obernolte (R-Calif.) argues for a principled approach to AI governance based on “the values of freedom and entrepreneurship,” instead of “government control of technology, and the anti-democratization of knowledge.”
Obernolte, the only member of Congress with a graduate degree in AI, is exactly right. Unfortunately, the current AI policy dialogue is heading in the opposite direction, as “calls for regulation in the United States and across the globe have reached a fever pitch from both government and academia.”
Panicked rhetoric and extreme proposals are now commonplace in AI policy debates. At a Senate Judiciary Committee hearing last month, one lawmaker suggested that we should begin with the assumption that AI wants to kill us, while other lawmakers and witnesses recited a litany of hypothetical worst-case scenarios. Lawmakers are already floating a new technocratic agency and licensing schemes for AI, among other heavy-handed regulatory proposals.
The Importance of Innovation Culture
We need to reset the debate over AI and work toward common-sense policies that are not rooted in fear of the future. Today’s talk of new command-and-control regimes and bureaucracies is counterproductive because such regimes could undermine the enormous benefits of algorithmic systems and weaken America’s global competitiveness in the unfolding computational revolution.
“Part of the brilliance of America’s technology industry over these many years is that it has been allowed to flourish in a largely unregulated environment,” Obernolte argues. “This has given our nation the flexibility to remain agile and on the cutting edge of modern innovation, without the interference of burdensome regulations that could have at many stages shut the industry down for good. It has catalyzed our leadership in the field over countries in Europe, Great Britain, and most of Asia,” he correctly concludes.
Put simply, the United States got its innovation culture right for the internet and digital technology, and the nation must now do the same for AI, machine learning (ML) and robotics. As a recent R Street study outlined, the key will be for America to adopt “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” Silver-bullet solutions do not exist, and innovation-crushing bureaucracies are not the place to start.
The Right Principle for Regulation
To address the risks associated with some AI applications, we need to use measured, context-specific governance solutions. Rep. Obernolte identifies how to strike the right balance when arguing that policymakers must avoid AI mandates “that stifle innovation by focusing on mechanisms instead of on outcomes.” What he means, as I elaborated in a new R Street filing to the Department of Commerce, is that “AI governance should be risk-based and should focus on system outcomes instead of system inputs or design.” A recent report from the Center for Data Innovation summarizes this principle: “Regulate performance, not process.”
This is quickly becoming the key issue in AI policy debates. Many regulatory advocates call for layers of preemptive, precautionary regulations for the underlying data sets, models and computational systems involved in creating new algorithmic products (i.e., the inputs or mechanisms on the process side of algorithmic systems). Their goal is to make algorithmic systems more “explainable” and make sure each part of the code is well understood.
Unfortunately, as noted in a new Federalist Society essay, “explainability is easier in theory than reality,” and converting this principle into a convoluted regulatory process would mean treating algorithmic innovation as guilty until proven innocent. A process-oriented regulatory regime in which all the underlying mechanisms are subjected to endless inspection and micromanagement would create innovation veto points, politicization, delays and other uncertainties, because it would mostly amount to a guessing game based on hypothetical worst-case thinking.
We need the opposite approach, the one Rep. Obernolte identified: a focus on algorithmic outcomes. What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner. A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it. This principle is the key to balancing entrepreneurship and safety for AI.
AI Is Already Being Extensively Regulated by Many Government Agencies
Importantly, regulating AI outcomes or performance is already being done through many existing statutes, agency regulations and court-based mechanisms. Too many people assume that AI and ML technologies are developing in a state of anarchy when, in reality, algorithmic systems and applications are already governed by a wide variety of policies that address concrete issues in real time. For example:
- In January, the National Institute of Standards and Technology released its “AI Risk Management Framework,” which was created through a multi-year, multi-stakeholder process. It is intended to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk.
- The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI- and ML-enabled medical devices for many years, and the agency possesses recall authority that can address risks that develop from algorithmic or robotic systems. The FDA is currently refining its approach to AI/ML in a major proceeding.
- The National Highway Traffic Safety Administration (NHTSA) has been regularly revising its driverless car policy guidelines since 2016. Like the FDA, NHTSA also has broad recall authority, which it used in February 2023 to mandate a recall of Tesla’s Full Self-Driving system, requiring an over-the-air software update for the more than 300,000 vehicles equipped with that software package.
- In 2021, the Consumer Product Safety Commission issued a major report highlighting the many policy tools it already has to address AI risks. Like the FDA and NHTSA, the agency has recall authority that can address risks posed by consumer-facing algorithmic or robotic systems.
- In April, Securities and Exchange Commission Chairman Gary Gensler told Congress that his agency is moving to address AI and predictive data analytics in finance and investing.
- The Federal Trade Commission (FTC) has become increasingly active on AI policy issues and has noted in a series of recent blog posts that the agency is ready to use its broad authority over “unfair and deceptive practices” involving algorithmic claims or applications.
- The Equal Employment Opportunity Commission (EEOC) recently released a memo as part of its “ongoing effort to help ensure that the use of new technologies complies with federal [equal employment opportunity] law.” It outlines how existing employment antidiscrimination laws and policies cover algorithmic technologies.
- Along with the EEOC, the FTC and the Consumer Financial Protection Bureau, the Civil Rights Division of the Department of Justice released a joint statement in April in which the agency heads said they would look to take preemptive steps to address algorithmic discrimination.
This is real-time algorithmic governance in action. Additional regulatory steps may be needed later to fill gaps in current law, but policymakers should begin by acknowledging that a great deal of algorithmic oversight authority already exists across the federal government’s 434 current agencies or departments, and that many of those bodies are actively considering how to address AI and robotics policy. In some cases, agencies might already be regulating some autonomous systems too aggressively, as appears to be the case with the Federal Aviation Administration, which has been very slow to allow commercial drones to take off.
All this regulatory capacity makes it clear that the United States does not need a new technocratic bureaucracy to cover all-things-AI when so many laws, agencies and regulations already exist. The country never had a Consumer Electronics Agency, a Federal Computer Commission or a Bureau of Internet Control, for example. But it would be silly to say that consumer electronics, computers or the internet operate in a state of complete lawlessness. Instead, a wide variety of laws apply, and many issues are also adjudicated in the courts under common law standards such as product liability, negligence, design defects, failure to warn, breach of warranty and other torts, as well as property and contract law. This same framework can help address AI risks.
Conclusion: Getting the Balance Right
The United States must get its governance balance right for AI. The stakes are very high, as Rep. Obernolte concludes, noting how “legislators in the [European Union] have moved too aggressively” and are “arbitrarily halting the development of artificial intelligence within their borders and allowing the rest of the world to progress while they lag behind.” In this way, AI policy has geopolitical significance. Obernolte says that EU-style regulation “would be particularly harmful in the United States,” especially as the country faces threats from China and other nations that are racing to keep up. Our nation must not shoot itself in the foot as this race intensifies.