The White House’s Oct. 30 Executive Order (EO) on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” directs a wide range of actions, with safety and security at the fore. Some experts believe artificial intelligence (AI) presents a “risk of extinction,” while others think it has the potential to improve cybersecurity and national security. Either way, the Biden administration and other stakeholders have focused on cybersecurity—and safety, more broadly—for domestic AI development, usage and innovation. 

The path forward is clear: the administration, Congress and the private sector should account for and encourage AI’s beneficial cybersecurity-related uses while addressing risks in an even-handed manner.

Five aspects of the EO are notable for cybersecurity in particular:

  1. Protection of critical dual-use technologies

    Developers of AI foundation models with potential dual-use applications (defined within the EO as highly complex and trained models with “the potential to pose a serious risk to security, national economic security, national public health or safety”) must provide reports to the U.S. Department of Commerce on how they are protecting their technology from malicious threat actors (Section 4.2(a)(i)). The EO also requires developers of models with dual-use potential to share information with the government under Defense Production Act (DPA) authority. The DPA is a Cold War-era law that gives the president authority to control industry during emergencies (Section 4.2(a)). R Street has previously expressed concerns about the misuse of this law, and such use must be carefully scrutinized moving forward. It is imperative to clarify which developers must report, what penalties they will face for noncompliance, how disclosed information might be used against the reporting entity, who has access to that information and how it is secured. While the EO’s intent is to harden these organizations against cyber risks, the federal government itself has experienced numerous security lapses and must ensure that disclosed data does not end up in the wrong hands.
  2. Oversight on sensitive technology

    Multiple agencies will be involved in the oversight of sensitive AI technologies. For example, cloud infrastructure providers will be required to report to the Department of Commerce when a foreign entity transacts with them to train large AI models that could be used in malicious cyber activity. A workable definition of what constitutes leveraging a cloud service for malicious cyber activity will be needed, and scalability concerns must be investigated. As with the numerous other private-sector obligations the EO cites, implementation must ensure that oversight and compliance requirements do not impede companies’ ability to innovate.
  3. Additional standards and tools

    The National Institute of Standards and Technology (NIST) is to develop red-team testing standards (Section 4.2(a)(i)(C)) to help identify and exploit potential vulnerabilities in AI systems before adversaries do. NIST is a proven leader in this space, having built the voluntary AI Risk Management Framework on its prior privacy and cybersecurity frameworks. Additional standards and guidelines can serve as a resource for both government and industry when implemented in a collaborative and voluntary manner.
  4. Use of AI to improve cyber resiliency

    Malicious actors, whether nation-states or criminal groups, will leverage AI to carry out their nefarious goals. It is critical for the United States to be at the forefront of AI research and development in order to stay a step ahead of these threats. The Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency will assess cross-sector risks and investigate how to mitigate the resulting vulnerabilities (Section 4.3(a)(i)). The DHS will also establish an Artificial Intelligence Safety and Security Board to advise critical infrastructure sectors on improving their cybersecurity posture (Section 4.3(a)(v)). The EO stipulates that the DHS and the Department of Defense (DOD) will develop, test and evaluate methods for leveraging AI technologies to find vulnerabilities in government networks and systems (Section 4.3(b)(ii)). These initiatives could also yield best practices that help private-sector organizations better protect themselves from cyberattacks.
  5. National security nexus

    The EO directs the development of an interagency-led National Security Memorandum (Section 4.8) to ensure the U.S. military and intelligence communities use AI safely and effectively and take action to counter adversary use. The DOD was active in AI well before the current buzz around generative AI emerged earlier this year, hosting a data and AI symposium, deploying specific use cases across the department and more. These efforts and others should continue.

Attention to AI’s safety and security aspects is not new. The Biden administration’s meetings with industry have already resulted in voluntary commitments centered on three broad goals: ensuring products are safe before release, developing products while prioritizing security and earning public trust. Specific steps included committing to internal and external security testing, sharing information such as best practices for safety, and publicly reporting systems’ capabilities and limitations. This approach appropriately leveraged the expertise of those on the front lines of AI innovation.

Meanwhile, the EO attempts to reassert U.S. leadership in this arena, as other countries and blocs around the world have made progress in our absence. Its release also comes just ahead of the United Kingdom’s AI Safety Summit this week, which will focus on the risks of AI, especially at the frontier of development, and how to mitigate them. Industry, including the likes of Google and Microsoft, has also embraced self-imposed security frameworks.

Several aspects of this directive make us wary. The EO sets a bold timeline, with many of its directives requiring implementation in 90 to 240 days. As agencies carry out the order under scrutiny from Congress and other stakeholders, it is imperative that major decisions are not made hastily and that members of Congress, industry and civil society are included in deliberations. It is also important that regulatory bodies and agencies do not interpret this order as a blank check, which could invite delays, lawsuits and other challenges that ultimately harm our nation’s security.

With the introduction of this EO, we hope to see the United States continue to participate meaningfully in the global discourse on AI. We expect that subsequent policy efforts will avoid harming innovation and creating unnecessary bureaucracy and that any further legislative or regulatory action will account for the immense opportunities AI offers to cybersecurity.