This article is part of a series of written products inspired by discussions from the R Street Institute’s Cybersecurity and Artificial Intelligence Working Group sessions. Additional insights and perspectives from this series are accessible here.

Despite the enthusiasm surrounding AI’s ability to enhance cybersecurity, there is a persistent sense of uncertainty over how to harness the technology responsibly. A robust understanding of the AI risk landscape can help us strike a balance between our adoption of AI and risk mitigation, allowing us to make informed decisions about how to best integrate AI into our organizations and craft flexible solutions, regulations, and guidelines for AI’s development and use.

We organize AI risks into five key categories, examining each through the lens of cybersecurity use cases: data misuse and privacy violations; responsible AI risks; security exploitations; unintended technical misuses; and AI safety harms. Our analysis illuminates the interdependent nature of these categories, providing a comprehensive overview of the AI risk landscape.

1. Data Misuse and Privacy Violations
AI systems rely on data. Whether structured or unstructured, data plays a critical role in the process of training both supervised and unsupervised AI models, allowing them to learn patterns, make predictions, and provide intelligent outputs to users. This data can be collected by web scraping publicly available data on the internet, tracking user interactions within websites or applications, or buying access to existing databases.

However, data misuse or corruption can severely compromise the integrity of an AI model’s training process, resulting in biased, inaccurate, or unreliable AI systems. Unauthorized or unethical data usage not only exposes organizations to potential legal ramifications but also amplifies concerns about data ethics, privacy, copyright protections, and preserving user trust. For example, an AI-powered chatbot may misuse personal data collected from user conversations to train new AI models without consent, potentially infringing on user privacy and leading to copyright violations. Another concerning application emerges when data, particularly from sensitive populations like the military, is collected and fed into an AI model for training and advancement. Such actions can pose significant risks to national security, underscoring the importance of transparent and ethical data usage.

Some companies responsible for deploying generative AI tools are already entangled in legal disputes due to their extensive web scraping practices, which include potential unauthorized access to and use of personally identifiable information (PII). Beyond these immediate legal, ethical, and privacy implications, data misuse can impede the development of responsible AI systems. To manage these risks, policymakers must continue to prioritize data (and the adequate protection of that data) as a focal point in AI regulatory and governance efforts.  

2. Responsible AI (RAI) Issues
Responsible Artificial Intelligence (RAI) encompasses ongoing efforts to align AI development and deployment with legal and ethical guidelines, emphasizing values like privacy, accountability, fairness, transparency, and sustainability, among others. RAI is essential because it guards against issues in which AI failures may produce harmful content, such as promoting self-harm or inciting violence. These AI failures can result in far-reaching consequences like the amplification of mis- and disinformation campaigns.

Deepfakes, maliciously manipulated media that convincingly impersonate individuals, present another issue within RAI. Because deepfakes can disseminate false information, they erode trust in digital data and media. This confusion creates opportunities for cybercriminals to manipulate individuals and organizations through sophisticated social engineering attacks, potentially leading to cyber incidents like data breaches.

Another critical issue in RAI is biased results and recommendations. When AI algorithms perpetuate biases that are present in their training data, they can produce discriminatory outcomes. For example, a drugstore chain that relied on AI-powered facial recognition to prevent shoplifting was recently sued after customers alleged that the technology racially profiled them. This case serves as a stark illustration of the importance of RAI, emphasizing the need for fairness and accountability in AI development and deployment.

3. Security Exploitations
In the landscape of AI misuse, cybersecurity concerns can be divided into traditional and novel risks and opportunities for exploitation. Traditional cybersecurity protects on-premise systems from threats like malware, phishing, and network vulnerabilities, among others. In AI misuse scenarios, traditional cybersecurity concerns may manifest when machine learning (ML) developers overlook essential cybersecurity practices, such as securing their application programming interfaces (APIs), managing user access privileges, or inadvertently exposing sensitive credentials within code repositories. These oversights can create opportunities for cyber adversaries to exploit software vulnerabilities. Moreover, conventional social-engineering cybersecurity threats like phishing can also be enhanced with AI, making it easier for cyber attackers to execute and more difficult for cyber defenders to block and detect.

In contrast, novel cybersecurity addresses emerging attack vectors that extend beyond on-premise systems, like cloud environments and AI systems. Adversarial machine learning (AML) is at the forefront of novel AI cybersecurity efforts. For instance, attackers can manipulate large-language models (LLMs) by using deceptive prompts, confusing them to inadvertently reveal its own training data and model parameters. Data leaks and prompt injections both serve as examples of how AML poses a critical challenge within AI security, driving the need for proactive measures and defenses.

Recognizing both traditional and novel cybersecurity concerns is essential for strengthening AI system security and crafting adaptive cybersecurity solutions that proactively counter emerging threats.

4. Unintended Technical Misuses and Harms
Unintended technical misuse and harms refer to situations in which AI systems are used in ways that they were not originally intended, leading to negative consequences like privacy violations or unexpected outputs.

User error, such as accidentally sharing confidential information with AI systems, is one way that unintended technical misuse can occur. This often occurs due to users lacking guidance, security awareness, or a complete understanding of AI capabilities and limitations. For example, users may inadvertently share sensitive information about their organization’s strategic plan with AI systems to revise it, assuming it is safe to do so but potentially leading to data breaches, leaks, or privacy violations. This is a significant AI and cybersecurity risk because it increases the potential for personal or corporate data compromises.

Moreover, while AI systems are advanced, they are not immune to errors and may produce hallucinatory or unintended outputs due to data biases or unforeseen interactions. These outputs have the potential to mislead or cause harm, highlighting the need for rigorous AI testing and for users to approach AI outputs with a cautious and critical mindset.  

5. AI Safety Harms
AI Safety encompasses a category of concerns related to the ethical implications and broader societal impacts of AI applications. While AI Safety definitions and frameworks vary, the weaponization of AI systems and rogue AIs are two issues among many concerns included in this category of AI risks.

The weaponization of AI systems involves the deployment of AI for destructive purposes. For example, nation-state actors could exploit AI systems initially designed for nuclear security, elevating the threat of nuclear disasters. These breaches in AI safety can trigger global security crises with far-reaching, irreversible consequences. Furthermore, the accessibility of AI systems raises concerns about how determined researchers or even mobilized amateurs may weaponize them for other nefarious purposes, ranging from cyberattacks against enterprise systems to disinformation campaigns.

Rogue AIs—AI systems that exceed their programmed parameters and make autonomous decisions without human control or oversight—represent another concern within AI Safety. These rogue AIs could become power-seeking, resist shutdown attempts, employ deceptive strategies, and deviate from their original goals.

Although AI Safety shares core values like reliability, robustness, fairness, transparency, and accountability with RAI, it distinguishes itself by focusing on preventing broad, unintended consequences to society rather than solely on ethical and legal alignment or individual unintended technical misuse. This broader and anticipatory perspective within the landscape of AI misuses aims to ensure the safe and responsible use of AI technology at the global or societal levels.

Key Takeaways
AI, like any technology, is a tool built by people for people. Failing to capture the human element when crafting solutions can heighten risks and lead to unpredictable and unintended consequences. Policymakers can lead this effort by actively engaging with a diverse group of stakeholders, including AI developers, researchers, and affected individuals, to gain insights into the real-world impact of AI technologies. Moreover, policymakers should prioritize industry-specific AI risk assessments that focus on understanding the real-world harms that end users may experience. These initiatives can shed light on the effects of AI technologies from different perspectives, helping policymakers craft tailored and flexible solutions that effectively mitigate risks.

While grouping AI risks into different categories helps ensure that no issues are unintentionally overlooked, we must also recognize that these risks are interconnected. For instance, the misuse of data within AI systems can yield biased results and lead to disparate and potentially negative outcomes. The intertwined nature of these categories underscores the importance of adopting a holistic, people-centric approach to navigate and address the AI and cybersecurity risk landscape. As more AI models are integrated across systems and networks, the risks of emergent behavior and cascading effects are likely to increase rapidly, evolving the AI risk landscape. This people-centric approach is essential for responsibly harnessing AI’s potential, safeguarding U.S. cybersecurity defenses against evolving risks, and leading global standards in technological innovation, digital transformation, and governance.