On Tuesday morning, the Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law is holding a hearing on “Oversight of A.I.: Rules for Artificial Intelligence.” Among those testifying is Sam Altman, CEO of OpenAI—the company that created ChatGPT, the artificial intelligence (AI) chatbot that became the fastest-growing app in history earlier this year.

If this hearing plays out like many other recent tech policy hearings, the session could be heavy on AI “doomerism” (i.e., dystopian rhetoric and pessimistic forecasts) and be accompanied by the usual grievance politics about “Big Tech” more generally. Congressional hearings on tech policy matters have become increasingly angry affairs, as “outraged congressional members raise their voices and wag their fingers at cowed tech executives” to play to the cameras and their political bases.

OpenAI and other AI developers can expect to be called to Washington for regular public floggings now. Algorithmic innovators will also be asked to make many “voluntary” concessions to lawmakers on both sides of the aisle, even though the political left and right want largely different things from tech companies.

Hopefully, things will not play out this way, and a serious discussion about AI policy will ensue, one free of shouting, threats and doomsday rhetoric. Lawmakers need to realize that we are living through a profoundly important moment, and one the United States must be prepared for if the nation hopes to continue its global technology leadership role.

Here’s what Sam Altman should say to set the right tone for this and future AI policy hearings.


Senators, thank you for the opportunity to come before you today to discuss what could become the most important technological development of our lifetimes: the computational revolution that combines the power of AI, machine learning, advanced robotics and quantum computing.

The Potentially Profound Benefits of AI

These technologies are rapidly transforming many sectors and professions such as medicine and health care, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and countless others. The potential exists for AI to drive explosive economic growth and productivity gains. A 2018 McKinsey study predicted an additional $13 trillion in global economic activity by 2030, “or about 16 percent higher cumulative GDP compared with today.”

But it is what AI will mean for every American that matters most. The benefits to our living standards will be enormous. AI has the ability to help us improve our health, extend our lives, expand transportation options, avoid accidents, improve community safety, enhance educational opportunities, access superior financial services and much more. AI-driven robotic systems will also assist with many dangerous jobs, making workplaces much safer. There is a compelling public interest in ensuring that algorithmic innovations are developed and made widely available to society.

There are risks associated with AI, too. But as we look to address them, we should keep three objectives in mind.

Understand That the Greatest Risk of All Is Stopping AI Progress

First, we must avoid overly burdensome technology regulations that can undermine AI’s benefits to the public and the nation as a whole. It is essential that the United States be a leader in AI to ensure our continued global competitive standing and geopolitical security.

The most important way to counter China and other nations attempting to overtake us on this front is to make sure we do not follow their lead in terms of heavy-handed control of digital systems. America’s crucial advantage over other countries comes down to our uniquely agile and adaptive approach to technological governance, which is rooted in a general freedom to innovate and accompanied by a diversity of ex-post policy solutions to address problems as they develop. This more iterative, bottom-up governance approach not only gives the public more options, but it also provides our nation with a safer and more secure technological base.

Over the past quarter century, America’s computing and digital technology sectors became “a growth powerhouse” that drove “remarkable gains, powering real economic growth and employment,” as Brookings scholars have summarized. The economic benefits were staggering. According to the Bureau of Economic Analysis, in 2021, “the U.S. digital economy accounted for $3.7 trillion of gross output, $2.41 trillion of value added (translating to 10.3 percent of U.S. gross domestic product), $1.24 trillion of compensation, and 8.0 million jobs.” In the process of generating all that economic activity, U.S. tech companies became household names across the globe, attracting talent and investment to our shores. Almost half of the top 100 digital tech firms in the world with the most employees are U.S. companies, and 18 of the world’s top 25 tech companies by market cap are U.S.-based firms.

We should be proud of this success story and remember that it was the product of smart policy toward digital technology and the internet. By contrast, across the Atlantic, heavy-handed European Union (EU) regulation decimated digital innovation and weakened the continent’s technological base relative to other nations. Europe became “The Biggest Loser” on the digital tech front, as a recent magazine survey of 11 experts concluded. Incredibly, the EU is now doubling down on this convoluted, compliance-heavy regulatory regime by applying it to algorithmic systems, and it hopes to export that model to the world.

U.S. policymakers should all agree that we do not want America to face Europe’s predicament or compromise the benefits that could flow from the computational revolution. We must, therefore, reject that misguided policy approach when considering AI governance.

Consider How Existing Policy Systems and Solutions Can Help

Second, as policymakers look to address potential algorithmic risks, they should begin by tapping the extensive state capacity that already exists. The notion that AI exists in a completely unregulated vacuum is false, and Congress should ask the White House Office of Science and Technology Policy to initiate a comprehensive review of all the existing agencies, policies and systems that can or already do govern AI and robotic systems.

The U.S. federal government alone has over 2.1 million civilian employees working at 15 Cabinet agencies, 50 independent federal commissions, and over 430 departments, agencies and sub-agencies altogether. Agencies like the Federal Trade Commission, the Food and Drug Administration, the National Highway Traffic Safety Administration, the Equal Employment Opportunity Commission, the Consumer Product Safety Commission and many others have already made moves to consider how they might address AI and robotics. On top of all this federal activity, state and local governments possess many overlapping consumer protection laws and other targeted statutes that govern algorithmic systems. We should also not ignore how the courts and our common law system will evolve to address novel AI problems.

While lawmakers worry about AI being under-regulated, they should also consider the opposite problem: America’s AI innovators could suffer “death by a thousand cuts” from a thicket of existing agencies and policies that hinder the development of AI or limit competition and worker opportunities. It is particularly important to relax barriers to labor mobility and employment flexibility, especially occupational licensing rules, so that workers can adjust more quickly to market disruptions.

Fill Policy Gaps with Flexible Governance Solutions

Third, after policymakers conduct a thorough review of existing policies covering AI, we can better identify where gaps exist and where additional policy steps are required. However, as solutions are proposed, we must weigh the trade-offs associated with each of them.

AI risks are highly nuanced and context specific. For example, policies for driverless cars, drones, medical software, digital hiring tools, and defense and policing systems all entail different issues and necessitate different solutions—many of which are best pursued using existing policy authority. Overly broad, one-size-fits-all mandates will not work.

AI governance should be risk-based and focus on system outcomes, instead of being preoccupied with system inputs or design. In other words, policy should concern itself more with actual algorithmic performance, not the underlying processes. If policy is based on making AI perfectly transparent or explainable before anything launches, then innovation will suffer because of endless bureaucratic delays and paperwork compliance burdens.

The National Institute of Standards and Technology (NIST) has created an important new “AI Risk Management Framework” that is meant to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk. This framework, which builds on a previous NIST multi-stakeholder effort on cybersecurity risk, is a voluntary set of guidelines “designed to be responsive to new risks as they emerge” instead of attempting to itemize them all in advance. The agency notes that “[t]his flexibility is particularly important where impacts are not easily foreseeable and applications are evolving.” NIST correctly concludes that, while some of the risks of AI are well understood, assessing the degree of actual harm associated with some of them can be challenging due to measurement issues or different conceptions of what constitutes harm.

AI policy should encourage the private sector to continuously refine best practices and ethical guidelines for algorithmic technologies. Professional and academic associations have already done an extensive amount of AI safety work rooted in the ideas of “ethics by design” and of keeping humans “in the loop” at critical stages of the development process. These are wise principles, but they need not always be imposed in a highly regulatory, top-down fashion. Instead of trying to create an expensive and cumbersome new regulatory bureaucracy for AI, the easier approach is to have NIST and the National Telecommunications and Information Administration form a standing committee that brings parties together as needed to address concerns.

This more bottom-up and agile governance approach can go a long way toward helping to promote a culture of responsibility among leading AI innovators, and it represents a better way of balancing safety and innovation for complex, rapidly evolving technologies.

In closing, we must not forget that our policy disposition toward AI and algorithmic systems will play an important role in America’s relative global competitive and geopolitical standing as China and other nations race to catch up to us. The United States must cultivate a positive innovation culture if it hopes to prosper economically and build the safer, more secure technological base that will leave the nation prepared for the computational revolution.
