March 8, 2023

The Honorable Gary Peters
Chair
Homeland Security and Governmental Affairs Committee
U.S. Senate
Washington, D.C.  20510

The Honorable Rand Paul
Ranking Member
Homeland Security and Governmental Affairs Committee
U.S. Senate
Washington, D.C.  20510

Dear Chairman Peters, Ranking Member Paul, and members of the Committee:

Thank you for your decision to hold a hearing on March 8, 2023, titled “Artificial Intelligence: Risks and Opportunities.” My name is Adam Thierer, and I am a senior fellow at the R Street Institute. I also currently serve as a commissioner on the U.S. Chamber of Commerce’s Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, which will be releasing its final report tomorrow morning.[1]

It is essential that the United States be a leader in AI to ensure our continued global competitive standing and geopolitical security. The most important way to counter China, Europe, and other nations attempting to overtake U.S. innovation on this front is to make sure we do not follow their lead in terms of heavy-handed control of digital systems. America’s crucial advantage over other countries comes down to our uniquely agile and adaptive approach to technological governance.

As I noted in a recent piece for the R Street Institute:

The European Union (EU) has implemented a wide variety of data collection mandates that have restricted innovation and competition across the continent. These regulatory burdens have left the EU with few homegrown information technology firms. As a result, the EU now mostly focuses on exporting its mandates globally…[2]

According to the Bureau of Economic Analysis, in 2021, the U.S. digital economy accounted for $3.7 trillion of gross output, $2.41 trillion of value added (or 10.3 percent of U.S. GDP), $1.24 trillion of compensation and 8 million jobs.[3] Globally, 18 of the world’s top 25 digital tech companies by market capitalization are U.S.-based firms, and 46 of the top 100 firms with the most employees are U.S. companies.[4]

The American economic success story was driven by smart, bipartisan choices that Congress and the Clinton administration made in the 1990s. There are four key ingredients behind America’s successful approach to digital innovation:

  1. Freedom to innovate by default. Entrepreneurs were given a green light to experiment with bold new ideas without having to seek permission to innovate.
  2. World-class university programs and research labs. The United States is home to some of the world’s leading technical educational programs, which have produced much of the best talent in digital technology markets today.
  3. Openness to global talent and investment. The United States opened its tech markets to skilled immigrants and global investors, who flocked here to enjoy the benefits of vibrant markets and our superior higher education institutions.
  4. Ongoing multi-stakeholder negotiations and flexible regulatory responses when concerns develop. The National Telecommunications and Information Administration (NTIA) and other agencies have repeatedly brought together diverse stakeholders to hammer out solutions to complicated technology problems.[5]

These ingredients are the secret sauce that has powered America’s commanding lead in the internet and computing sectors. And now, they can help us lead the global AI race. The hard reality of AI governance is that it is going to be extremely difficult to establish any policy for algorithmic systems that is not quickly overtaken by fast-moving technological realities. There is no one-size-fits-all approach to AI that can preemptively plan for the challenges that we will face even a few months from now.

Government’s role should be focused on helping to convene different stakeholders and working toward consensus on best practices on an ongoing basis.[6] In this regard, the National Institute of Standards and Technology (NIST) has taken important steps with its recently released AI Risk Management Framework.[7]

This NIST framework, which builds on previous multi-stakeholder efforts, is meant to help AI developers better understand how to identify and address various types of potential algorithmic risk. NIST notes it “is designed to address new risks as they emerge” instead of attempting to itemize them all in advance.[8] “This flexibility is particularly important where impacts are not easily foreseeable and applications are evolving,” the agency explains.[9] Building on this, NIST and the NTIA can take the lead in convening ongoing multi-stakeholder efforts that bring diverse parties to the table to hammer out consensus-driven best practices and solutions on the fly.

As this governance model for AI evolves, it should be guided by some key principles. Several of these recommendations are found in the U.S. Chamber of Commerce AI Commission report launching tomorrow.

First, AI governance should be risk-based and focus on system outcomes, instead of being preoccupied with system inputs or design. In other words, policy should concern itself more with actual algorithmic performance, not the underlying processes.[10] If policy is based on making AI perfectly transparent or explainable before anything launches, then innovation will suffer because of endless bureaucratic delays and paperwork compliance burdens.

Second, AI policy should utilize existing laws and remedies before adding new regulatory mandates. As noted, a vast array of laws and regulations already exist that can effectively govern algorithmic systems.

Third, AI policy should encourage the private sector to refine best practices and ethical guidelines continuously for algorithmic technologies. An extensive amount of work has already been done in this regard, but it will require constant vigilance and iteration to address emerging risks effectively. 

Thank you for holding this hearing. I look forward to addressing your questions.

Sincerely,

/s/Adam Thierer
Senior Fellow
R Street Institute


[1] “Artificial Intelligence Commission: Preparing for the Future,” U.S. Chamber of Commerce, last accessed March 3, 2023. https://www.uschamber.com/major-initiative/artificial-intelligence-commission.

[2] Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Institute, Feb. 9, 2023. https://www.rstreet.org/commentary/mapping-the-ai-policy-landscape-circa-2023-seven-major-fault-lines.

[3] Tina Highfill and Christopher Surfield, “New and Revised Statistics of the U.S. Digital Economy, 2005–2021,” Bureau of Economic Analysis, November 2022. https://www.bea.gov/system/files/2022-11/new-and-revised-statistics-of-the-us-digital-economy-2005-2021.pdf.

[4] “Largest tech companies by market cap,” Companies Market Cap, last accessed March 4, 2023. https://companiesmarketcap.com/tech/largest-tech-companies-by-market-cap.

[5] Ryan Hagemann et al., “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Colorado Technology Law Journal 17 (Feb. 5, 2018). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3118539.

[6] Lawrence E. Strickling and Jonah Force Hill, “Multi-stakeholder internet governance: successes and opportunities,” Journal of Cyber Policy 2:3 (2017), pp. 298–99. https://www.tandfonline.com/doi/abs/10.1080/23738871.2017.1404619.

[7] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), U.S. Department of Commerce, January 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[8] Ibid., p. 4.

[9] Ibid., p. 4.

[10] Daniel Castro, “Ten Principles for Regulation That Does Not Harm AI Innovation,” Information Technology and Innovation Foundation, Feb. 8, 2023. https://itif.org/publications/2023/02/08/ten-principles-for-regulation-that-does-not-harm-ai-in