S.3205/H.R. 6936 – Federal Artificial Intelligence Risk Management Act of 2023/2024

Bill Summary

Federal Artificial Intelligence Risk Management Act of 2023/2024 (S.3205 / H.R. 6936). This bill establishes guidelines for the federal government to mitigate risks associated with artificial intelligence (AI). It was referred to the Senate Committee on Homeland Security and Governmental Affairs, the House Committee on Oversight and Accountability, and the House Committee on Science, Space, and Technology.

Cybersecurity Score Rating

Rating: Cyber positive. This bill has the potential to improve the safety and security of AI technologies deployed within the federal government. (Last updated: Feb. 22, 2024)

Key Provisions

  • Requires the Office of Management and Budget (OMB) to direct federal agencies to adopt the Artificial Intelligence Risk Management Framework (RMF) developed by the National Institute of Standards and Technology (NIST) regarding the use of AI
  • Specifies appropriate cybersecurity strategies and the installation of effective cybersecurity tools to improve the security of AI systems
  • Establishes an initiative to deepen AI expertise among the federal workforce
  • Ensures that federal agencies procure AI systems that comply with the framework
  • Requires NIST to develop sufficient test, evaluation, verification, and validation capabilities for AI acquisitions

Background

Federal agencies employ AI systems for a range of purposes, from addressing cybersecurity vulnerabilities to automating repetitive processes to improving health care outcomes. However, because this technology is novel and no universally enforced standards govern its safety and security, the federal government’s use of it is susceptible to challenges and risks, including:

  • How to best mitigate data privacy and security risks associated with data collected and processed on Americans;
  • How to address challenges associated with the lack of transparency about AI decision-making; and
  • How to reduce or eliminate potential negative outcomes resulting from the use of untrue or unverified data.

In 2023, NIST released the first iteration of the AI RMF, a set of voluntary best practices that individuals, organizations, and society can use to better manage risks associated with AI. The RMF has two primary components. The first frames AI risks and describes the characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. The second component describes four specific functions (govern, map, measure, and manage) to address the risks of AI systems. The RMF has been praised as a “rights-preserving, non-sector specific,” and adaptable framework for organizations of all types and sizes; the framework is also interoperable with international standards.

Given the opaqueness of some AI systems and the potential inconsistencies in outputs, risks posed by AI are unique. The NIST AI RMF provides a structured methodology for ensuring that organizations can formulate internal processes and tools to address risks that have the potential to introduce harm. President Joe Biden’s 2023 Executive Order (EO) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence sought to incorporate the AI RMF into federal agencies’ guidelines and best practices (sections 4.1(a)(i)(A) and 4.3(a)(iii)), and to promote the AI RMF as a worthy global technical standard (sections 11(b) and 11(c)).

Rating: Cyber Positive

Key Takeaways

A legislative approach can encapsulate and give statutory support to some of the directives outlined in President Biden’s EO while avoiding typical pitfalls of executive orders (e.g., the risk that a future administration rescinds components, or the entirety, of the EO, or concerns about executive branch overreach). The bills’ bipartisan, bicameral nature indicates broad consensus around their merits and the political will for their passage. The legislation would also mark one of the first times that adoption or use of NIST frameworks would be required for the federal government and private sector vendors. In particular, these bills would deliver a number of improvements for AI security and cybersecurity, including:

  • Requiring suppliers to attest compliance with the RMF in order to be eligible for a federal AI contract award;
  • Raising public sector resilience against AI misuse and risks and improving harmonization of technical and security standards across federal agencies; and
  • Consistent engagement, review, and updating of standards for the test, evaluation, verification, and validation of AI acquisitions.

Cybersecurity Analysis


Recommendations

Given the concerns highlighted in our analysis, we offer the following recommendation to help mitigate challenges and reduce risks.
