September 12, 2023

Dear Chairman Hickenlooper, Ranking Member Blackburn and members of the Subcommittee:

Thank you for your decision to hold a hearing on September 12, 2023, titled “The Need for Transparency in Artificial Intelligence.” My name is Adam Thierer and I am a senior fellow at the R Street Institute. I also recently served as a commissioner on the U.S. Chamber of Commerce’s Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation.[1] 

Artificial intelligence (AI) technologies are already all around us, and they are helping to make our lives better in many ways. But the potential of algorithmic systems is even greater, and these technologies also have important ramifications for our country’s global competitive standing and geopolitical security.

The United States must reject the regulatory approaches being advanced by China, Europe and other nations, which are mostly rooted in a top-down, command-and-control approach to AI systems. Instead, America’s approach to technological governance must continue to be agile and adaptive because there is no one-size-fits-all approach to AI that can preemptively plan for the challenges that we will face even a short time from now.[2]

At this early stage of AI’s development, government’s role should be focused on helping developers work toward consensus best practices on an ongoing basis.[3] In this regard, the National Institute of Standards and Technology (NIST) has taken crucial steps with its AI Risk Management Framework, which is meant to help AI developers better understand how to identify and address various types of potential algorithmic risk.[4] NIST notes that the framework “is designed to address new risks as they emerge” instead of attempting to itemize them all in advance.[5] “This flexibility is particularly important where impacts are not easily foreseeable and applications are evolving,” the agency explains.[6]

While it is always important to consider the dangers that new technologies could pose, extreme regulatory solutions are not warranted. Safety considerations are vital, but there is an equally compelling public interest in ensuring that algorithmic innovations are developed and made widely available to society.

Toward that end, AI governance should be risk-based and focus on system outcomes, instead of being preoccupied with system inputs or design.[7] In other words, policy should concern itself more with actual algorithmic performance, not the underlying processes. Transparency and explainability are important values that government can encourage, but these concepts must not be mandated in a rigid, overly prescriptive fashion.[8]

Algorithmic systems evolve at a rapid pace and undergo constant iteration, with some systems updated on a weekly or even daily basis. If policy requires that AI be made perfectly transparent or explainable before anything launches, innovation will suffer under endless bureaucratic delays and paperwork compliance burdens. Society cannot wait years or even months for regulators to formally sign off on mandated algorithmic audits or impact assessments, many of which would be obsolete before they were completed.

Converting audits into a formal regulatory process would also create several veto points that opponents of AI advancement could use to slow progress in the field. AI innovation would likely grind to a halt in the face of lengthy delays, paperwork burdens and significant compliance costs. Moreover, algorithmic auditing will always be an inexact science because of the inherent subjectivity of the values being considered. Auditing algorithms is not like auditing an accounting ledger, where the numbers either add up or do not. When evaluating algorithms, there are no objective metrics that can establish the “correct” amount of privacy, safety or security in a given system.

This means that legislatively mandated algorithmic auditing or explainability requirements could also invite significant political meddling in algorithm-powered speech platforms, raising free speech concerns. Mandated AI transparency or explainability could also create intellectual property problems if trade secrets were revealed in the process.

This is why it is essential that America’s AI governance regime be more flexible, bottom-up, and driven by best practices and standards that evolve over time.[9] Beyond encouraging the private sector to continuously refine best practices and ethical guidelines for algorithmic technologies, government can utilize the vast array of laws and regulations that already exist before adding new regulatory mandates. The courts and our common law system stand ready to address novel risks that cannot be foreseen in advance. Many agencies are also moving aggressively to consider how they might regulate AI systems that touch their fields. Using existing regulatory tools and powers, such as product recall authority and unfair and deceptive practices law, agencies can already address proven algorithmic harms. We should not add another huge federal bureaucracy or burdensome licensing mandates to the mix until we have exhausted these existing solutions.[10]

The United States must create a positive innovation culture if it hopes to prosper economically and ensure a safer, more secure technological base. Policymakers must not try to micromanage the future or predetermine market outcomes. It is essential that we strike the right policy balance as our nation faces serious competition from China and other nations that are looking to counter America’s early lead in computational systems and data-driven digital technologies.

Sincerely,

/s/

Adam Thierer

Senior Fellow

R Street Institute


[1] U.S. Chamber of Commerce, Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation: Report and Recommendations (March 2023). https://www.uschamber.com/technology/artificial-intelligence-commission-report.

[2] Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023). https://www.rstreet.org/research/getting-ai-innovation-culture-right.

[3] Lawrence E. Strickling and Jonah Force Hill, “Multi-stakeholder internet governance: successes and opportunities,” Journal of Cyber Policy 2:3 (2017), pp. 298–99. https://www.tandfonline.com/doi/abs/10.1080/23738871.2017.1404619.

[4] National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), U.S. Department of Commerce, January 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.

[5] Ibid., p. 4.

[6] Ibid., p. 4.

[7] Adam Thierer, “The Most Important Principle for AI Regulation,” R Street Institute Real Solutions, June 21, 2023. https://www.rstreet.org/commentary/the-most-important-principle-for-ai-regulation.

[8] Comments of Adam Thierer, R Street Institute to the National Telecommunications and Information Administration (NTIA) on “AI Accountability Policy,” June 12, 2023. https://www.rstreet.org/outreach/comments-of-the-r-street-institute-to-the-national-telecommunications-and-information-administration-ntia-on-ai-accountability-policy.

[9] Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023). https://www.rstreet.org/research/flexible-pro-innovation-governance-strategies-for-artificial-intelligence.

[10] Neil Chilson and Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023. https://fedsoc.org/commentary/fedsoc-blog/the-problem-with-ai-licensing-an-fda-for-algorithms.
