Today, the U.S. Chamber of Commerce Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation (AI Commission) released a major report on the policy considerations surrounding AI, machine learning (ML) and algorithmic systems. The 120-page report concludes that "AI technology offers great hope for increasing economic opportunity, boosting incomes, speeding life science research at reduced costs, and simplifying the lives of consumers." 

Enlightened public policy can help advance those objectives while also addressing the various concerns raised about new AI and ML technologies. “The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort,” the report argues.

A Flexible, Bipartisan Approach to AI Policy

The Chamber’s AI Commission was formed in early 2022 to “provide independent, bipartisan recommendations to aid policymakers” on AI policy “as it relates to regulation, international research and development competitiveness, and future jobs.” The effort was co-chaired by former Reps. John Delaney (D-Md.) and Mike Ferguson (R-N.J.). It was my honor to serve as one of the commissioners on the AI Commission and contribute to the report. 

The AI Commission held several field hearings around the United States and one abroad to explore these issues, hearing testimony and fielding comments from a diverse array of experts and interested parties. I greatly appreciated the chance to learn first-hand how AI is already improving important professions and public services.

For example, the Commission traveled to Cleveland last April, where we heard from Cleveland Clinic doctors and scientists about the remarkable ways they were already using machine learning and AI technologies to treat patients and advance public health goals. They told us how algorithmic tools were being used to detect irregular heartbeats, to help with early stroke detection and to diagnose degenerative brain diseases such as Alzheimer's, dementia and Parkinson's. 

Dr. Tom Mihaljevic, CEO and president of the Cleveland Clinic, told the Commission how ML is revolutionizing public health. He noted that when he started practicing medicine in the 1980s, the overall body of medical information doubled roughly every seven years; today it is doubling every 73 days. That is an astonishing amount of information to process, and Dr. Mihaljevic pointed out that the only way to take full advantage of it is with the power of AI and ML capabilities. He also explained how AI will be crucial in improving remote and home-based medical care and in addressing the needs of a rapidly aging population, regardless of where patients live. The Commission heard from experts in many other fields and professions with similar stories about how AI is powering important innovations and improving human well-being. 

Addressing Risks without Derailing Innovation

Of course, AI/ML raises some risks, and the AI Commission heard from others who worried about how new algorithmic technologies might affect individuals and societies. The issues include privacy, security, physical safety, discrimination and bias, disinformation and more. 

The Commission’s report looks to address these concerns in a balanced fashion. In the past, I have worked with several other blue-ribbon task forces devoted to balancing online safety and free speech issues. If there is one thing I learned from those efforts, and now from my experience on the U.S. Chamber AI Commission, it is that there is no silver-bullet, one-size-fits-all solution to all the complex “socio-technical” concerns surrounding emerging technologies like social media and now AI. 

The challenges are ever-changing because entrepreneurs are constantly innovating and developing exciting new applications. Meanwhile, consumer demand and public needs are constantly evolving and expanding. In the past, product life cycles and changes in public use of analog-era technologies (print media, broadcasting, cable, etc.) could be measured in years or even decades. In the digital era, by contrast, that cycle is hyper-compressed and now plays out over weeks and months. 

Principles to Guide AI Policy

As the digital revolution morphs into the computational revolution, we will be confronted with almost daily developments that both astound and scare us. In this environment, a nimble and flexible approach to algorithmic governance will be essential. The AI Commission report stresses the importance of certain policy "pillars," or guiding values, including efficiency, neutrality, proportionality, collaboration and flexibility. Risk balancing and resiliency-building efforts will be crucial to achieving these goals. To that end, the AI Commission identified "Five Key Principles for AI Regulation":

(1) Evaluate applicability of existing law and regulation.

(2) Fill gaps in existing law while avoiding statutory and regulatory overreach.

(3) Assume a risk-based approach in AI regulation and enforcement.

(4) Distribute but coordinate AI regulation.

(5) Encourage private sector approaches to risk assessment and innovation.

The Commission noted that the government has an important role in guiding the development of algorithmic systems, but the report “recommends an ‘as-necessary’ framing for the government role to allow for more flexibility as technology advances. The government, organizations, and citizens are ill-served when laws or regulations are passed only to become immediately outdated.”

In this way, the Chamber AI Commission report nicely complements the approach sketched out in the National Institute of Standards and Technology (NIST)’s new AI Risk Management Framework (AI RMF), which was just finalized in January. The NIST AI RMF is a multistakeholder-led, consensus-driven, voluntary effort that established a process for thinking about AI risk definitions, trade-offs and best practices. Both the NIST AI RMF and the new Chamber AI Commission report stress the need to be responsive to new risks as they emerge. “This flexibility is particularly important when impacts are not easily foreseeable and applications are evolving,” NIST argues. “While some AI risks and benefits are well-known, it can be challenging to assess negative impacts and the degree of harms.”

The Need for Humility

This humility about our ability to forecast the future is crucial. America needs a governance approach that is agile and adaptable in the face of uncertainty. AI policy cannot be created through the sort of static, top-down regulations that were often imposed on past technology sectors. Real-time, iterative governance efforts will be needed to address issues as they develop.

This is why NIST has made it clear that the AI RMF is a dynamic framework, going so far as to version it like computer software (i.e., "Version 1.0"). This reflects the need to be responsive to fast-moving developments in the field and to devise balanced solutions on the fly. AI risk management will also be highly context-specific, and new issues will arise that are hard to envision today. Agency guidelines, industry technical standards and multistakeholder-driven consensus best practices will need to evolve over time and be buttressed by existing rules, the courts and potentially some targeted new policies. This is sometimes referred to as "soft law" governance. Some federal agencies have already tapped this approach to address algorithmic concerns. For example, the U.S. Department of Transportation has released a series of versioned guidance documents for driverless cars that reflect a voluntary, bottom-up, consensus-driven process.

In addition to addressing concerns about algorithmic risk, the Chamber AI Commission report also discusses the need to prepare the workforce of the future and to consider a variety of new educational approaches as well as training and reskilling efforts. It also stresses the importance of attracting skilled talent from abroad through wise immigration policies. 

A newly released R Street Institute study asks, “Can We Predict the Jobs and Skills Needed for the AI Era?” and discusses the track record of past government retraining and reskilling efforts. The study argues that policymakers will need to think creatively about these efforts because past programs often failed to identify future workforce trends and needs accurately. Once again, flexibility will be paramount.  

A Uniquely American Policy Framework

Finally, the Chamber AI Commission report also considers some of the thorny intellectual property and national security questions surrounding artificial intelligence. These issues have taken on greater significance as the European Union, China and many other nations look to advance their algorithmic capabilities. 

The combination of the NIST AI RMF and the U.S. Chamber AI Commission report offers a constructive, consensus-driven framework for algorithmic governance rooted in flexibility, collaboration and iterative policymaking. This represents a uniquely American approach to AI policy that avoids the more heavy-handed regulatory models seen in other countries, and it can help the United States once again be a global leader in an important new technological field.  
