Artificial Intelligence Legislative Outlook: Fall 2023 Update
While artificial intelligence (AI) was barely on Washington’s radar screen a year ago, it has quickly become one of the hottest tech policy issues in Congress. A considerable amount of legislative activity is now underway, with lawmakers and committees jockeying for position to advance wide-ranging AI policy frameworks.
Senate Majority Leader Chuck Schumer (D-N.Y.) has even taken a personal interest in moving legislation through a series of new “AI Insight Forums,” which he originally proposed in a June speech. This effort formally got underway on Sept. 13 with a closed-door listening session that included top tech CEOs, the heads of various special interest groups and academic regulatory advocates.
Going much further, Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.), who serve as the chair and ranking member of the Judiciary Subcommittee on Privacy, Technology, & the Law, respectively, recently released a comprehensive regulatory framework for AI that includes a new AI-specific regulatory agency, the licensing of high-powered AI systems, expanded AI developer liability, assorted transparency requirements and many other mandates. Meanwhile, in the states, the number of AI-related bills introduced has grown 440 percent over 2022, with 191 AI-related bills already introduced this year.
Rarely has a new technology generated so much attention from policymakers and academics prior to its widespread diffusion. Whether all this interest in AI translates into concrete legislative action remains uncertain, however. While many lawmakers insist they are committed to advancing AI legislation this session, there are several impediments to getting major AI legislation over the finish line. Some of those factors include:
- Breadth and complexity = steep learning curve: It will take time for Congress to get up to speed on AI issues because of the sheer breadth and complexity of the many nuanced issues in play. Lawmakers cannot even agree on a formal definition of the term, which is unsurprising because, as the U.S. Government Accountability Office has noted, “There is no single universally accepted definition of AI, but rather differing definitions and taxonomies.” This lack of common understanding, along with the extensive array of sub-issues that fit under the “AI” rubric, greatly complicates legislative efforts.
- Many issues = many special interests: A broad array of companies, trade associations, academics, special interests, and other governmental and non-governmental organizations feel they have a stake in AI policy. Thus, they all want to be heard from as congressional conversations and hearings get underway. This abundance of voices and wide-ranging concerns make AI policymaking more complicated than other tech policy issues, which typically have a tighter focus.
- Extreme proposals = less attention for practical ideas: Many of the congressional hearings about AI thus far have been dominated by extreme fears regarding AI “superintelligence” and have sometimes included references to dystopian narratives pulled from the plots of sci-fi shows and movies. Unsurprisingly, this has led to calls for far-reaching controls, such as the ideas in the Blumenthal-Hawley proposal. Because extreme rhetoric and calls for sweeping regulations generate considerable media attention, they leave less time for more practical proposals that could actually have a chance of successful implementation.
These factors will make it harder for lawmakers to advance broad-based AI regulation in the near term. If lawmakers want to get something done in this session, they should do two things:
- Congress should set aside the most radical regulatory proposals. Massive new technocratic bureaucracies are a non-starter, for example. America did not have a Federal Internet Agency or National Software Bureau for the digital revolution, and it does not need a Department of AI now. Policymakers should first take advantage of the extensive array of regulations already enforced by the federal government’s 434 existing departments. Similarly, proposals to adopt sweeping AI licensing schemes and other computational controls are counterproductive. Lawmakers should instead agree to study such ideas and consider them only as a last resort if other options or remedies fail.
- Congress should break AI policy down into its smaller subcomponents and then prioritize among them where policy gaps might exist. “If we try to overreach, we may come up with goose eggs,” Sen. Mark Warner (D-Va.) told Politico on a recent podcast about the prospects for AI legislation. As Sen. Warner suggests, there is likely an inverse relationship between the ambition of an AI proposal and its chances of advancing in Congress this session. Thus, to balance innovation and safety—and ensure that rules keep pace with rapid and unexpected technological change—America needs a modular, targeted and incremental approach to AI policy that is rooted in flexibility, agility and adaptability.
If Congress fails to adopt this approach, then lawmakers run the risk of being a non-actor on AI policy while the executive branch and the states advance their own agendas.
AI Legislative Measures that Might Be Achievable
The good news is that there are some proposed frameworks or measures that satisfy the two tests outlined above by avoiding radical, unworkable schemes and focusing on more targeted, tractable objectives.
Some of these efforts focus on studying AI governance more before taking further action. For example, Reps. Ted W. Lieu (D-Calif.), Ken Buck (R-Colo.) and Anna Eshoo (D-Calif.) introduced the “National AI Commission Act” to create an expert body to consider how AI might be regulated and examine “the capacity of agencies to address challenges relating to such oversight and regulation.”
Similarly, Sen. Michael Bennet (D-Colo.) has introduced the “Assuring Safe, Secure, Ethical, and Stable Systems for AI Act” (ASSESS AI Act), which would create a task force to “assess existing policy, regulatory, and legal gaps for artificial intelligence” and “make recommendations to Congress and the President for legislative and regulatory reforms to ensure that uses of artificial intelligence and associated data in Federal Government operations comport with freedom of expression, equal protection, privacy, civil liberties, civil rights, and due process.”
Other measures look to “ensure interagency coordination regarding Federal artificial intelligence activities,” as would be done through the “AI Leadership to Enable Accountable Deployment Act” (AI LEAD Act). The bill, sponsored by Sens. Gary Peters (D-Mich.) and John Cornyn (R-Texas), would create a new “Chief Artificial Intelligence Officer” and “Artificial Intelligence Governance Board” within federal agencies to achieve better cross-agency cooperation and coordination.
Frameworks that Tap Existing Authority
Other proposals sketch out less restrictive regulatory approaches for AI that tap existing agency regulatory authority or utilize non-binding governance frameworks.
Sen. Bill Cassidy (R-La.), ranking member of the Senate Health, Education, Labor, and Pensions Committee, recently released a white paper that discusses the benefits and risks of AI for the workforce, educational system and health care sector. “A sweeping, one-size-fits-all approach for regulating AI will not work and will stifle, not foster, innovation,” he said. “Instead, we need robust, flexible frameworks that protect against mission-critical risks and create pathways for new innovation to reach consumers.” Instead of proposing a single bill to address AI issues, Cassidy’s framework focuses on the role Congress and agencies can play by using traditional laws and regulations.
A new bill being floated by Sens. John Thune (R-S.D.) and Amy Klobuchar (D-Minn.) would build on the multistakeholder approach to AI risk standards developed by the National Institute of Standards and Technology (NIST). Their “AI Research, Innovation and Accountability Act” would meld self-regulatory mechanisms and some limited government enforcement of AI safety standards. It would instruct NIST to carry out research to facilitate the development of self-certification standards, risk assessments and testing processes specified by the Department of Commerce. While there are many details to work out, this is a risk-based approach to AI policy that aims to develop targeted rules and lean on flexible certification systems to balance innovation and safety.
Investment-oriented or Educational-focused Efforts
Some new legislative measures propose increasing federal investments in computational systems or standard-setting initiatives, while others focus on deploying AI systems and talent throughout the government. Members of the bipartisan Congressional AI Caucus have floated the “Creating Resources for Every American To Experiment with Artificial Intelligence Act” (CREATE AI Act), which would create the National Artificial Intelligence Research Resource (NAIRR). The NAIRR would be a cloud-computing resource to help democratize AI use and development by providing low-cost access to computing resources for researchers in many different fields. The resource is also intended to serve as a testbed for the development of AI best practices.
Sen. Maria Cantwell (D-Wash.) has discussed the idea of creating a “GI Bill for AI” to help retrain workers affected by AI disruptions and has introduced the “FUTURE of Artificial Intelligence Act,” which would create a federal advisory committee to examine the economic opportunities and impacts of AI. Similarly, Rep. Carolyn Maloney (D-N.Y.) and Sens. Peters and Mike Braun (R-Ind.) have proposed the “AI Training Act” to train and upskill federal workers so that they better understand AI and its applications.
AI education and literacy is another area where targeted legislation could have a better chance of advancing. The “Artificial Intelligence Literacy Act of 2023,” soon to be introduced by Rep. Lisa Blunt Rochester (D-Del.), would amend the Digital Equity Act of 2021 to fund AI literacy initiatives at all education levels to help the public better understand how to safely use AI tools and understand AI-enabled technologies. It would include support to create labs that provide students with hands-on AI learning experiences. This measure should enjoy widespread support, although lawmakers currently appear more focused on regulatory proposals.
Other Highly Focused or Issue-specific Proposals
Many targeted AI-related proposals have been introduced that would focus on how AI systems affect specific values or concerns.
Some of these bills address critical government tasks, such as the interplay of AI and biosecurity (“Artificial Intelligence and Biosecurity Risk Assessment Act”) or nuclear launch capabilities (“Block Nuclear Launch by Autonomous Artificial Intelligence Act”). More broadly, the “Transparent Automated Governance Act (TAG Act)” would require government agencies to disclose how they might utilize AI “when using certain automated systems and augmented critical decision processes.” Another measure addresses public health preparedness and response to AI threats (“Strategy for Public Health Preparedness and Response to Artificial Intelligence Threats”).
Meanwhile, concerns about the role of AI in elections have already led to many hearings and proposals like the “Protect Elections from Deceptive AI Act” and the “REAL Political Ads Act,” which have attracted bipartisan support. Those bills and others such as the “AI Labeling Act,” the “AI Disclosure Act” and the “Candidate Voice Fraud Prohibition Act” would demand varying degrees of transparency or regulation of AI use in political advertising. Other targeted concerns about so-called “predictive policing” algorithmic applications or the use of facial recognition tools by law enforcement bodies could also be ripe for more focused investigation and legislation.
There are potential free speech concerns or other issues surrounding some of the efforts mentioned here, but these more targeted measures have a better chance of advancing than broad-based, top-down AI regulatory measures.
AI Focus is Crowding Out Other Tech Priorities
Another reason for Congress to adopt a more pragmatic and incremental approach to AI policy would be to allow other tech policy priorities to move. Washington’s swelling interest in AI policy has crowded out some other important legislative proposals that might have advanced otherwise, and still could. For example, the American Data Privacy and Protection Act, a baseline data privacy measure, was advancing in the last session of Congress, and supporters had hoped to get it finalized this session. But the measure now faces competition from AI policy and risks being derailed as a result, even though many analysts argue that federal data protection policies should be implemented before AI regulation is considered. Meanwhile, state governments continue to implement a patchwork of different privacy and data protection laws in the absence of a federal framework.
Congress also appears unable to move driverless car legislation despite bipartisan support and general agreement that a national framework for some basic autonomous vehicle (AV) guidelines would make sense. As with data privacy, states are also advancing their own AV laws to fill the policy vacuum left by Congress’s inability to act. There are special interest obstacles to federal AV legislation, but the growing congressional focus on broad-based AI regulation makes targeted AV legislation even less likely now.
The focus on sweeping AI measures also distracts committees from the important oversight role they should be exercising over the many federal agencies pursuing targeted algorithmic regulations in their respective fields. The administrative state is where the real AI policy action is happening in the short term, yet Congress is largely ignoring it.
As 2023 comes to a close and another election cycle heats up, Congress will have to make tough choices regarding what is feasible on the AI policy front. The more sweeping and comprehensive regulatory proposals will need to be put aside if lawmakers hope to avoid the legislative “goose eggs” scenario that would leave Congress on the sidelines as federal agencies, the states and international regulators move aggressively to regulate AI.