Five Promising Cybersecurity Measures from the First-Ever International AI Treaty
International efforts to establish artificial intelligence (AI) governance have been steady but cautious, with collective actions like the first global AI Safety Summit in 2023 laying the groundwork with key principles for mitigating AI risks. On Sept. 5, 2024, the United States signed the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which introduces a risk-based governance approach that strives to harness AI’s benefits while ensuring the protection of human rights, democratic principles, and the rule of law.
This landmark agreement is the first-ever binding international AI treaty, establishing a universal legal standard that guides the responsible development and use of AI systems throughout their lifecycle. While its ratification timeline remains unclear, the treaty offers a balanced approach that fosters global consensus without imposing overly restrictive rules that could stifle innovation. The treaty also recognizes both the opportunities and risks of AI research and development. Furthermore, this treaty aims to complement ongoing domestic AI governance efforts, allowing countries to maintain and build upon their national regulations. Notably, the treaty exempts AI systems related to national defense from its obligations, provided such activities adhere to applicable international human rights law and respect democratic institutions and processes.
Among the treaty’s provisions are five promising cybersecurity measures designed to promote responsible AI: transparency and oversight, personal data protection and privacy, reliability and security, safe innovation, and risk and impact management.
1. Transparency and Oversight
As outlined in Article 8, the treaty mandates that AI-generated content be clearly identifiable and that decision-making processes within AI systems are understandable and accessible. These transparency and oversight requirements are critical for both AI safety and cybersecurity because they allow key stakeholders, such as developers and system administrators, to scrutinize AI behavior for potential vulnerabilities, anomalies, biases, and errors.
For example, in the banking industry, AI is increasingly used to detect fraud. Without transparency and oversight, banking customers could have transactions blocked without understanding why they were initially flagged. Transparency and oversight ensure that system administrators can explain how AI decisions are made, allowing them to identify and correct biases or errors that could otherwise cause harm. Moreover, by requiring AI-generated content to be labeled, greater transparency and oversight can help reduce the risk of successful phishing attacks, as users and administrators can more easily identify and block suspicious content intended to mimic legitimate communications. By emphasizing transparency and oversight, the treaty promotes global norms of increased openness and accountability, making this a promising development for enhancing cybersecurity and trust in AI systems.
2. Personal Data Protection and Privacy
Article 11 of the treaty calls on signatories to implement effective safeguards for personal data and to ensure that AI systems comply with domestic and international data protection laws. This measure ensures that AI systems align with existing privacy frameworks and aims to mitigate the potential of unauthorized access to personal data or misuse.
Personal data protection and privacy play pivotal roles in AI security. AI-powered chatbots, for example, rely on hundreds of gigabytes of collected data for model training and refinement. Without clear data protection and privacy measures, this data could be vulnerable to cyberattacks resulting in unauthorized access or misuse.
While this provision emphasizes the importance of personal data protection and privacy, it leaves the implementation of safeguards to each signatory’s existing legal frameworks. Although the treaty’s flexibility is a strength because it allows nations to tailor their approach, it can also present challenges if a signatory does not have an existing domestic data protection law. In the United States, for example, the absence of a comprehensive federal data privacy law could create gaps in personal data protection and privacy, underscoring the need for clearer guidance on how to address such disparities.
As AI-powered chatbots become more ubiquitous across various businesses and industries, ensuring that AI systems adhere to strong data protection and privacy standards not only protects individual privacy, but also prevents larger-scale AI security risks like data leaks, data theft, or model manipulation. In this way, the treaty’s emphasis on data protection and privacy reflects progress and a growing global consensus on the importance of securing personal information throughout the AI lifecycle.
3. Reliability and Security
Article 12 encourages signatories to adopt measures that promote the reliability of AI systems and trust in their outputs. A promising cybersecurity development, this focus reflects a clear and shared understanding of the interconnected nature of reliability, security, and trust in AI systems.
The National Institute of Standards and Technology defines reliability as ensuring that a system operates as intended. A reliable AI-powered chatbot, for instance, should consistently and accurately detect prompts that violate its terms of service and adjust its responses accordingly, ensuring that all of its conversations align with the system’s original intended use. By continuously identifying and appropriately responding to potentially harmful inputs, the AI-powered chatbot’s reliability helps reduce the risk of it being manipulated to generate harmful content or reveal vulnerabilities that cybercriminals could exploit.
Furthermore, this provision plays a crucial role in establishing standardized definitions for key terms in AI research and development. By fostering greater international understanding and agreement on these definitions, Article 12 reduces ambiguity and helps ensure consistency in how AI safety and security measures are applied across different countries. This shared understanding strengthens the overall cybersecurity posture of AI systems and use and facilitates greater user trust by maintaining predictable interactions.
4. Safe Innovation
Article 13 of the treaty emphasizes the need to foster innovation while safeguarding human rights, democracy, and the rule of law. It strikes this balance by calling on signatories to establish controlled environments for the development, experimentation, and testing of AI systems under the oversight of designated authorities.
“Sandboxes,” or controlled environments, have long been essential in cybersecurity, allowing organizations to rigorously test systems for potential cybersecurity vulnerabilities before broader deployment. “AI sandboxes” would enable organizations to simulate cyber threats and stress-test AI systems in a secure, monitored setting. This proactive approach allows organizations to identify risks early and address them before exposing the public to potential safety harms and cybersecurity threats. If effectively implemented, the controlled environments outlined in Article 13 would empower organizations to innovate freely while upholding high standards of safety and security, ensuring that AI’s benefits are realized without compromising human rights, democratic principles, or the rule of law.
The inclusion of safe innovation in the treaty is a significant cybersecurity development because it underscores the fact that innovation and safety are not mutually exclusive goals, but rather complementary priorities.
5. Risk and Impact Management
Finally, in Article 16, the treaty encourages signatories to adopt or maintain comprehensive frameworks for risk assessment and impact management at every stage of the AI lifecycle. This holistic approach, which requires continuous identification, evaluation, and mitigation of risks, ensures AI systems are deployed with careful consideration of the potential harms to human rights, democratic principles, and the rule of law.
The measure’s risk-based approach aligns well with both cybersecurity and AI governance principles. Emphasizing the need for iterative risk management, Article 16 promotes proactive AI risk management by encouraging developers and researchers to consider the severity and probability of potential impacts before they have an opportunity to occur. Moreover, by adjusting risk mitigation strategies based on the context, severity, and intended use of an AI system, this measure equips organizations to adapt their defenses and improve their responses to emerging threats.
Initial Reception and Future Outlook
The United States is among the first 10 countries to sign the treaty, joining other initial signatories including Andorra, Georgia, Iceland, Israel, Moldova, Norway, San Marino, the United Kingdom, and the European Union. Initial reactions to the treaty have been largely positive, with experts commending its progress in advancing international AI governance. Many researchers have lauded its strong foundation in human rights law and its balanced approach to AI innovation, equality, individual autonomy, and privacy throughout the AI lifecycle.
The treaty’s comprehensive and forward-thinking approach builds upon ongoing research and policymaking efforts by embedding key cybersecurity considerations into multiple AI governance standards that signatories are encouraged to adopt. In doing so, the treaty recognizes cybersecurity as a foundational pillar of responsible AI. However, despite these promising cybersecurity developments, the treaty’s true efficacy will depend on the strength of its implementation and enforcement. Variability in how signatories interpret and operationalize key concepts, such as safety, security, democratic principles, and innovation, could lead to inconsistencies in outcome. Though an accompanying explanatory report was released with the treaty to help clarify definitions and provide additional context for each provision’s objectives, a follow-up report could extend this effort by defining clearer benchmarks for implementation. The recent “Governing AI for Humanity” report from the United Nations exemplifies how such reports can enhance global AI governance efforts by aligning priorities and fostering coordination on shared risks.
In the United States, the State Department’s Bureau of Democracy, Human Rights, and Labor highlighted how the treaty aligns with existing U.S. policies, including guidelines for AI governance, innovation, and risk management from the White House. Moving forward, the treaty must be submitted to the U.S. Senate for ratification. Though the Biden administration has not yet announced a timeline for this process, the treaty, once ratified, would not only provide an opportunity for global collaboration on innovation, risk management, and human rights protections, but also solidify U.S. leadership in advancing responsible AI governance.