On Feb. 20, House Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the creation of a new bipartisan Task Force on artificial intelligence (AI) that will be co-chaired by Rep. Jay Obernolte (R-Calif.) and Rep. Ted Lieu (D-Calif.). This task force “will seek to produce a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.”

Although other AI task forces have already been established, this latest one represents a sensible step forward for Congress, and the bipartisan nature of the new task force makes it even more important. The choice of Reps. Obernolte and Lieu to co-chair the task force is sensible because they have shown thoughtful leadership on AI policy issues and have been regular speakers on the issue at hearings and events.

It remains unclear whether this Congress can craft any AI policy with the legislative clock ticking fast during this election year. Nonetheless, Congress needs to create a governance framework for AI that both ensures that citizens can enjoy the enormous benefits associated with advanced algorithmic technologies and preserves the United States’ position as a world leader in digital technology as China and other nations race to catch up.

Toward that end, here are 10 principles the new congressional AI task force should consider to help craft a sensible AI policy framework for America.

1) Reiterate that the freedom to innovate remains the default for U.S. digital policy. U.S. policy should give entrepreneurs a green light to experiment with bold, new ideas without having to seek permission to innovate. What separates the United States from China, the European Union, and other governments is that our technology policy creates a positive innovation culture, not an innovation cage, for digital entrepreneurialism. We must keep it that way if we are to once again lead in the next great technological revolution. The new AI task force should build on the Clinton administration’s 1997 Framework for Global Electronic Commerce, which charted a principled, pro-innovation vision like this and inspired a generation of visionaries to become digital technology leaders in many different global technology sectors.

Recommended reading: Getting AI Innovation Culture Right

2) Ensure that AI policy remains rooted in a flexible, risk-based framework that relies on ongoing multistakeholder negotiations and evolutionary standards. Our approach must be flexible enough to keep pace with rapidly changing algorithmic technologies. The National Telecommunications and Information Administration and other agencies have brought together diverse stakeholders repeatedly to hammer out solutions to complicated digital technology problems, and the National Institute of Standards and Technology has taken important steps through its AI Risk Management Framework to help developers craft standards and solutions to address potential algorithmic risks in a flexible fashion. This is the agile and iterative governance model that America has used for internet policy over the past quarter century. AI policy should continue to encourage private developers to work with other stakeholders to refine best practices and ethical guidelines for algorithmic technologies, without imposing heavy-handed government mandates preemptively.

Recommended reading: Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence

3) Target regulation by using a sectoral approach that breaks down AI policy into smaller, more manageable chunks. As its first order of business, the new AI task force should inventory the extensive regulatory capacity that already exists rather than attempt to develop an entirely new regulatory superstructure for AI. Former National Economic Director Bill Whyman, who spent his career at the intersection of finance, emerging technologies, and government policy, notes that, while the United States “is not likely to pass a broad national AI law over the next few years,” it is important to understand that “no broad national law does not mean no regulation.” Indeed, this is already how digital policy works. America does not have a Federal Computer Commission for computing or the internet but instead relies on the wide variety of laws, regulations, and agencies that existed long before digital technologies came along. The federal government has 436 departments and agencies, and many of them are already aggressively investigating how to oversee AI developments in their areas. In fact, some of them may already be over-regulating AI technologies. If this new AI task force wants to help Congress get something done in the short term, it should recommend that federal policymakers break AI policy down into its smaller subcomponents and then prioritize among them. For example, it would make more sense to address data privacy and driverless car issues in separate legislation, and plenty of broad, bipartisan support exists for measures in those fields. Efforts to incorporate those issues and everything else in one mega-bill are a recipe for legislative failure.

Recommended reading: Artificial Intelligence Legislative Outlook: Fall 2023 Update

4) When considering AI policies, focus on the outputs of algorithmic systems instead of the inputs into them. Regardless of how AI policy takes shape, it is crucial that lawmakers make it clear that the focus of regulation will be on algorithmic outputs or outcomes, not inputs or processes. Rep. Obernolte has repeatedly stressed this point in recent talks and essays when explaining why it is essential that policymakers avoid AI mandates “that stifle innovation by focusing on mechanisms instead of on outcomes.” In other words, policy should focus on how AI technologies perform and whether they do so in a generally safe manner. Too much of the AI policy discussion today is instead focused on hypothetical, worst-case scenarios pulled straight from the dystopian plots of science fiction stories. Those fear-based narratives then prompt calls for preemptive regulation of computational processes and treat AI innovations as guilty until proven innocent. That is the wrong standard for AI policy. As Rep. Obernolte correctly recommends, we should focus on real-world outputs and outcomes and then judge them accordingly.

Recommended reading: The Most Important Principle for AI Regulation

5) Ensure that AI is being defined in a sensible and consistent fashion, and avoid efforts that require all algorithmic systems to be “explainable.” Perhaps the greatest of all policy challenges relating to AI is simply defining the term. “There is no single universally accepted definition of AI, but rather differing definitions and taxonomies,” a U.S. Government Accountability Office report noted in 2018. There may be no escaping this definitional dilemma, but lawmakers must do their best to ensure that definitions are as clear and consistent as possible. Because AI concerns and regulations will vary widely by sector and context, the easiest way to avoid definitional confusion is to keep AI policy focused on targeted, sectoral approaches. It is equally important that lawmakers not demand that all AI systems be perfectly “explainable” in terms of how they operate. Algorithmic models are highly complex, and it would be virtually impossible to explain everything that went into creating them or how they reasoned their way to certain conclusions. Again, this is why it is important to focus on real-world outcomes instead of underlying processes.

Recommended reading: Comments of the R Street Institute to the National Telecommunications and Information Administration (NTIA) on AI Accountability Policy

6) Examine how to preempt state and local government AI regulations that would impede the development of a robust national marketplace in algorithmic systems. While Congress is moving slowly on AI policy, many states and localities are aggressively advancing new AI proposals. Some parochial AI regulation will obviously be enacted, but a confusing patchwork of inconsistent state and local mandates would encumber algorithmic innovation across the nation. While Congress cannot completely preempt all state and local AI regulatory activity, it can set some guidelines on what types of regulation might impinge upon interstate commerce or speech. Again, this is why breaking AI policy into smaller chunks makes even more sense; in some areas (like autonomous vehicles, drones, or intellectual property), preemption may be easier than in others (such as insurance markets, policing issues, or education policy).

Recommended reading: State and local meddling threatens to undermine the AI revolution

7) Ensure continued liability protections for online speech and commerce by preserving Section 230 of the Telecommunications Act of 1996. This provision has played an essential role in protecting digital speech and commerce and has allowed the internet to flourish without fear of frivolous lawsuits being filed at every juncture. It remains unclear whether Section 230 as written applies to generative AI systems, but a good case can be made that it should. Unfortunately, some lawmakers have been trying to gut Section 230—or at least make sure it does not extend to next-generation digital systems. The new AI task force should carefully consider how such a move would undermine algorithmic innovation by unleashing a flood of litigation against algorithmic innovators. Optimally, Section 230 protections would be extended to cover new AI systems, but, at a minimum, lawmakers must ensure that the law is preserved for existing speech and commerce.

Recommended reading: Without Section 230 Protections, Generative AI Innovation Will Be Decimated

8) Provide more resources for anti-fraud efforts or criminal enforcement involving algorithmic capabilities where needed. The most important AI enforcement tools will likely be existing consumer protections, anti-fraud laws, and other mechanisms such as the recall authority possessed by some federal regulatory agencies. In most cases, these agencies already have the right tools to address algorithmic problems that could develop, but they might need additional resources or training to address them more effectively. The new AI task force can identify where those gaps might exist.

9) Ensure an open door for global AI firms, talent, and investment. Online commerce exploded in the United States because the nation opened its doors to skilled immigrants and global investors who were eager to come here to enjoy the benefits of vibrant markets, world-class higher education institutions, and research facilities. In essence, the United States attracted the world’s best and brightest away from other nations by simply providing boundless opportunities through the general freedom to innovate. America needs to double down on that winning approach. The Biden administration’s recent AI executive order included some sensible steps to improve skilled immigration policy, and the new AI task force should explore how to follow through on efforts to attract even more talent and investment to our shores.

10) Carefully evaluate the strategic importance of AI systems and their role in enhancing America’s national security and geopolitical standing. It is now widely accepted that AI has important ramifications not only for global competitiveness, but also for national security and cybersecurity. An earlier task force, the National Security Commission on Artificial Intelligence, produced a major 2021 report on these issues and concluded that “America is not prepared to defend or compete in the AI era.” The new AI task force should explore how a strong digital technology base is an important source of both national prosperity and security. It is essential that the United States be a leader in AI to counter China and other nations attempting to overtake U.S. innovation in next-generation computational systems. It is especially important for American companies to once again lead this technological revolution to ensure that our values can shape information technology platforms and markets going forward. This is why our nation must get AI policy right and not shoot itself in the foot as the next great technological race gets underway with China and the rest of the world.

Recommended reading: Existential Risks & Global Governance Issues around AI & Robotics