This article is part of a series of written products inspired by discussions from the R Street Institute’s Cybersecurity-Artificial Intelligence Working Group sessions. Visit the group’s webpage for additional insights and perspectives from this series.

The rapid advancement of artificial intelligence (AI) underscores the need for a nuanced governance framework that actively engages stakeholders in defining, assessing, and managing AI risks. A comprehensive understanding of risk tolerance is essential: it requires delineating which risks are acceptable in the pursuit of AI's benefits, identifying the entities responsible for defining those risks, and clarifying the processes by which risks are assessed and then accepted or mitigated.

The exercise of assessing risk tolerance also creates the space stakeholders need to question how far regulatory intervention is warranted, or whether less restrictive, alternative, and supplementary solutions, such as issuing recommendations, sharing best-practice guidance, and launching awareness campaigns, would suffice. The clarity gained through this exercise also sets the stage for our assessment of three risk-based approaches to AI in cybersecurity: implementing risk-based AI frameworks; creating safeguards in AI design, development, and deployment; and advancing AI accountability by updating legal standards.

1. Implementing Risk-Based AI Frameworks

Risk-based cybersecurity frameworks provide a structured and systematic approach for organizations to identify, assess, and manage the evolving risks associated with AI systems, models, and data. The National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF) is one notable example of a risk-based AI framework that builds upon established cyber and privacy frameworks to aid organizations in the responsible design, development, deployment, and use of AI systems. By outlining how AI risks differ from traditional software risks, for example in the scale and complexity of AI systems, the NIST AI RMF helps organizations prepare for and navigate the evolving cybersecurity landscape with greater confidence, coordination, and precision. The voluntary nature of the NIST AI RMF also affords organizations the flexibility to tailor the framework to their specific needs and risk profiles. Congress has already taken steps to integrate the NIST AI RMF into federal agencies and AI technology procurement through its bipartisan, bicameral introduction of the Federal Artificial Intelligence Risk Management Act.

The NIST AI RMF is specifically designed for agility, which is essential for keeping pace with technological innovation and ensuring that safety and security protocols evolve in tandem with AI's expanding role. To supplement these efforts, the Biden administration's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence underscores the importance of continuous improvement and adaptation in AI governance and extends the framework's reach and robustness. Initiatives like the newly established U.S. AI Safety Institute and the AI Safety Institute Consortium build on the NIST AI RMF's core focus by strengthening the framework's capacity to address safety and security challenges within the AI domain. By fostering collaboration and innovation, these initiatives exemplify the proactive steps being taken to keep the NIST AI RMF responsive to AI's dynamic nature and implications.

2. Creating Safeguards in AI Development and Deployment

Safeguards ensure that AI systems operate within defined ethical, safety, and security boundaries. Some AI companies have already voluntarily committed to incorporating safeguards like rigorous internal and external security testing procedures before public release. This strategy is vital for maintaining user trust and ensuring responsible deployment and use of AI technologies.

However, acquiring the resources needed to implement these safeguards can be challenging for some organizations. Creating and implementing safeguards throughout AI development and deployment may also delay key innovation milestones. Furthermore, the risk of safeguards being bypassed or removed highlights a significant challenge in ensuring these protective measures remain effective and enduring. Meeting these challenges requires a mix of safeguarding strategies that are continuously evaluated and adapted to keep pace with the evolving AI technology landscape. Incorporating traditional cybersecurity principles like security-by-design and -default into AI systems can further enhance the efficacy of safeguarding strategies.

3. Advancing AI Accountability by Updating Legal Standards

The ongoing debate over AI accountability reflects a desire among some stakeholders to update legal standards so they can address the complexities of AI-induced risks and incentivize stakeholders to proactively mitigate cybersecurity and safety risks. Most recently, the National Telecommunications and Information Administration released its AI Accountability Policy Report, which calls for increased transparency into AI systems and independent evaluations, among other recommendations. However, skeptics express concerns, citing the need for balance and the potential harm that could arise if these efforts become a broad, top-down regulatory regime that imposes hefty compliance and innovation costs.

Three proposed policy actions include:

While these proposed legal updates to advance AI accountability aim to have companies prioritize cybersecurity and AI safety considerations, each has drawbacks. These complexities underscore the need for continued discourse and informed decision-making among policymakers.

Conclusion

It is imperative to ensure that proposed and emerging policy actions to mitigate potential AI risks do not inadvertently stifle innovation or erode U.S. technological leadership. AI systems only exist within real-world parameters, and “when [they] go rogue, the implications are multidimensional.” To keep AI from amplifying existing cybersecurity threats or introducing new ones, policymakers should think of AI systems holistically, as technology that is inextricably linked and integrated with both disparate and overlapping ethical and legal frameworks. Incorporating risk tolerance principles into AI regulation and governance solutions is essential to ensure we are equipped to balance AI’s considerable rewards with its potential risks.