This article is part of a series of written products inspired by discussions from the R Street Institute’s Cybersecurity-Artificial Intelligence Working Group sessions. Visit the group’s webpage for additional insights and perspectives from this series.

Continued advancements in artificial intelligence (AI) present exciting new opportunities in the field of cybersecurity. These promising technologies could lead to innovative solutions in many areas, such as compressing cybersecurity incident analysis time from minutes to milliseconds and learning patterns of malicious activity on an organization’s networks. Though the technology remains nascent in many aspects, AI has consistently demonstrated its potential as an enabler of enhanced analysis, speed, and scale in cybersecurity applications.

Cybersecurity demands continuous vigilance from network defenders, creating a taxing, resource-intensive, and ever-steepening uphill battle. Although AI has the potential to improve cyber threat and risk management in many ways, it has already enhanced cyber operations in five specific areas: threat detection and incident response, vulnerability management and remediation, red-teaming, enhanced security analysis and human workforce efficiency, and data privacy and security.

Despite AI’s significant potential for improving cybersecurity, many government and industry stakeholders have raised concerns about its implementation, innovation trajectory, and impact. For instance, it is critical to ensure that any novel security vulnerabilities introduced by AI technologies are assessed and mitigated. As policymakers grapple with the complexities of harnessing the best and regulating the worst of AI, they must strike a healthy balance between adoption speed and risk mitigation.

1. Threat Detection and Incident Response
AI can streamline and, where appropriate, automate cybersecurity incident response and remediation. When designed and implemented properly, AI systems can react instantly when a threat is identified, executing actions such as isolating affected devices or blocking malicious Internet Protocol (IP) addresses. This rapid response capability is crucial in minimizing the window of opportunity for a threat to cause extensive, or even irreparable, damage.

Platforms integrated with AI automatically execute actions ranging from marking an email as spam to shutting down a compromised connection based on the perceived threat. AI-enabled automation not only accelerates the containment of threats but also significantly reduces the burden on human cybersecurity teams. It can equip practitioners with advanced analytics and recommendations, allowing them to make more informed decisions about next steps in their cybersecurity strategy and to focus on more complex, integrated aspects of cybersecurity management.
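To make this kind of rule-driven automation concrete, the sketch below maps an alert's threat score to a containment action. It is a minimal illustration only; the playbook, thresholds, and action names are hypothetical and not drawn from any particular platform.

```python
# Hypothetical playbook: thresholds and action names are illustrative only.
# Entries are ordered from most to least severe.
PLAYBOOK = [
    (0.9, "isolate_device"),
    (0.7, "block_ip"),
    (0.4, "flag_for_analyst"),
]

def respond(alert):
    """Return the strongest action whose threshold the alert's score meets."""
    for threshold, action in PLAYBOOK:
        if alert["score"] >= threshold:
            return action
    return "log_only"

print(respond({"source_ip": "203.0.113.7", "score": 0.75}))  # prints "block_ip"
```

In practice, the score would come from an upstream detection model, and each action would invoke a real enforcement API; the value of the pattern is that machine-speed containment happens first, with the analyst reviewing afterward.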

Moreover, AI-enabled endpoint monitoring and detection is rapidly becoming a cornerstone of cybersecurity defense. It ensures that end-user and remote devices do not become weak links in the security chain by providing real-time protection against potentially malicious activities. AI's transformative and evolving role in threat detection underscores the importance of remaining informed on the latest technologies and taking proactive steps to embrace, adopt, and adapt to emerging tools optimized for cybersecurity.

AI-driven platforms also enhance threat identification processes, sifting through vast datasets to identify potentially malicious files and detecting new malware strains that traditional methods, such as rule-based or signature-based detection, are likely to miss. In phishing detection, AI solutions rapidly scan emails, scrutinizing content and attachments to identify phishing attempts, particularly those that do not match known phishing signatures.
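As a deliberately simplified stand-in for the trained models such platforms use, the sketch below scores an email against a few hand-weighted suspicious features. The patterns and weights are invented for illustration; a production system would learn these from labeled data rather than hard-code them.

```python
import re

# Hand-weighted features standing in for a trained classifier; the patterns
# and weights here are illustrative, not from any real product.
FEATURES = {
    r"verify your account": 0.4,
    r"urgent|immediately": 0.3,
    r"https?://\d{1,3}(?:\.\d{1,3}){3}": 0.5,  # link pointing at a raw IP
    r"password": 0.2,
}

def phishing_score(email_text):
    """Sum the weight of every suspicious feature found in the email."""
    text = email_text.lower()
    return sum(w for pattern, w in FEATURES.items() if re.search(pattern, text))

msg = "URGENT: verify your account at http://198.51.100.9/login"
print(round(phishing_score(msg), 2))  # prints 1.2
```

The point of the example is the shape of the approach, not the rules themselves: content-based scoring can flag messages that match no known phishing signature.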

2. Vulnerability Management and Remediation
AI is significantly advancing the domain of vulnerability management within cybersecurity. For instance, the ability of AI-driven tools to generate secure code, provide intelligent recommendations, and scrutinize existing code for bugs, vulnerabilities, and other security gaps is improving how developers approach secure code development. This is an area in which generative AI (GenAI) is already showing great promise. These tools can significantly enhance efficiency and accuracy by augmenting—rather than replacing—human code development and analysis.

AI has also proven invaluable in the detection and remediation of known vulnerabilities and zero-day exploits, which are unknown to software vendors and therefore particularly challenging to address. Through deep learning, AI has demonstrated success in predicting and detecting these vulnerabilities by analyzing patterns from previously identified exploits and extensive datasets of malicious and benign files.

3. Red-Teaming
When enhanced by AI, human-directed simulated testing and red-teaming are critical supplements to existing tools for identifying potential vulnerabilities and attack vectors in an organization's environment. By simulating complex attacks, such as compromising critical infrastructure, practitioners can pinpoint areas of weakness and craft robust, preventative mechanisms to counteract potential threats. Similarly, red teams are using large language models (LLMs) to efficiently design and conduct penetration tests representative of emerging real-world cyberattacks, such as social engineering and phishing attacks. GenAI could potentially become an invaluable tool for exploring branches and sequels in the inevitable (and never-ending) “cat-and-mouse” game of cybersecurity offense and defense.

Conversely, LLMs themselves are also subject to red-teaming operations. In this context, red-teaming AI (RAI) is used to assess LLMs' security robustness. Cybersecurity teams engage these models in simulated attacks or prompt hacking to uncover bugs and vulnerabilities that could lead to unreliable or negative outcomes. This proactive approach and early discovery help practitioners identify and manage cyber risks before they manifest in the real world. Consequently, RAI is an effective method for test, evaluation, verification, and validation (TEVV) of autonomous and AI systems in defense applications, ensuring the probability of “high-cost events” remains low.

4. Enhanced Security Analysis and Human Workforce Efficiency
The cyber workforce gap is projected to widen in 2024 and beyond, exacerbated by the exit of current cybersecurity professionals due to low morale and burnout. Some leaders are turning to GenAI to help close this gap. For instance, routine tasks like software patching or updating detection signatures can be automated by AI tools to ensure timely execution and reduce human error. Other AI tools that use natural language processing (NLP) are trained to understand the context and semantics of human language in unstructured data sources like blogs, news stories, and research reports to surface emerging threats.
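A small, rule-based stand-in for such pipelines is indicator extraction: pulling structured indicators of compromise out of unstructured reporting. The sketch below uses plain regular expressions rather than trained NLP models, and the report text is invented, but it shows the kind of structured output these tools hand to analysts.

```python
import re

# Regex-based indicator extraction -- a rule-based stand-in for the NLP
# pipelines described above. The sample report text is invented.
PATTERNS = {
    "cve": r"CVE-\d{4}-\d{4,7}",
    "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
}

def extract_indicators(text):
    """Return deduplicated, sorted matches for each indicator type."""
    return {name: sorted(set(re.findall(pattern, text)))
            for name, pattern in PATTERNS.items()}

report = "The actor exploited CVE-2021-44228 from 203.0.113.50 and 203.0.113.51."
print(extract_indicators(report))
```

Real NLP-driven tools go further, using language models to resolve context (which CVE an attacker "it" refers to, whether an IP is victim or attacker), but the output shape is similar.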

AI also increases the accessibility of cybersecurity training, equipping a more diverse talent pool to enter the cybersecurity workforce; expedites technical upskilling, advancing entry-level cybersecurity professionals into more senior roles more rapidly; and engages stakeholders in more realistic and timely training modules. By leveraging AI for tasks ranging from enhancing security analysis to human workforce training, policymakers can bridge the talent gap and strengthen digital infrastructure security resilience, addressing a critical need in today's cybersecurity landscape.

5. Data Privacy and Security
Data loss prevention (DLP) is another key area in which AI has contributed substantially. DLP focuses on preventing unauthorized access to or transfer of data, mitigating the risk of potential leaks and any downstream ramifications of compromise. Unlike traditional DLP tools, AI systems employ advanced machine learning (ML) algorithms to identify and monitor both sensitive data and user behavior. For instance, an AI-driven DLP system might analyze email traffic to detect unusual behavior patterns, such as an ordinarily cautious employee suddenly attempting to send large volumes of data outside the company network. This capability to understand context and user behavior allows AI systems to identify potential security incidents that traditional DLP tools might miss. AI-driven DLP solutions address both compliance requirements and data protection needs by continuously monitoring real-time data to ensure sensitive information remains secure.
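The behavioral piece of that example can be sketched very simply: flag a user's outbound data volume when it sits far outside their own historical baseline. The z-score check below is a minimal stand-in for the ML behavior models real DLP systems use, and the numbers are hypothetical.

```python
from statistics import mean, stdev

# Simple per-user baseline check: a z-score stand-in for the ML behavior
# models described above. All figures are hypothetical.
def is_anomalous(history_mb, today_mb, z_threshold=3.0):
    """Flag today's outbound volume if it is far above the user's baseline."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return today_mb != mu
    return (today_mb - mu) / sigma > z_threshold

baseline = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7]  # daily outbound MB, invented
print(is_anomalous(baseline, 250.0))  # prints True
```

The advantage over a fixed global limit is exactly the one described above: 250 MB might be routine for one role and a red flag for another, so the threshold adapts to each user's own history.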

The importance of AI-driven privacy-enhancing technologies (PETs) is increasing due to the growing demand for data privacy and security among consumers, policymakers, and regulators. These tools can autonomously manage, enforce, and audit data privacy regulations and policies in real time. For instance, an AI tool used by a health care provider could automatically redact patient-identifiable information when sharing data for research purposes, ensuring compliance with regulations like the Health Insurance Portability and Accountability Act (commonly known as HIPAA). The integration of AI in these aspects of data protection and compliance signifies a major advancement in cybersecurity strategies, offering enhanced protection of sensitive data.
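A redaction pass of the kind described above can be sketched as a set of pattern-to-token substitutions. The rules below are illustrative only: they cover a few identifier shapes and are nowhere near a complete HIPAA de-identification implementation, which in practice combines learned entity recognition with curated rules.

```python
import re

# Hypothetical redaction rules -- illustrative, not a complete HIPAA
# de-identification implementation.
RULES = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]"),          # US SSN-shaped numbers
    (r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]"),  # email addresses
    (r"(?i)\bMRN[:#]?\s*\d+\b", "[MRN]"),         # medical record numbers
]

def redact(text):
    """Replace each matched identifier with a placeholder token."""
    for pattern, token in RULES:
        text = re.sub(pattern, token, text)
    return text

note = "Patient MRN: 884201, contact jane.doe@example.com, SSN 219-09-9999."
print(redact(note))  # prints "Patient [MRN], contact [EMAIL], SSN [SSN]."
```

The AI contribution in real PETs is deciding *what* counts as identifying in context (names, rare conditions, dates), which fixed patterns like these cannot do alone.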

Another PET emerging as a potential game-changer in cloud security is homomorphic encryption. This technology allows computations to be performed on encrypted data without decrypting it first, maintaining data privacy even during analysis. For instance, a medical research company might use homomorphic encryption in conjunction with AI to securely analyze patient data stored within the cloud. Used with application programming interfaces (APIs), AI and ML applications have the potential to process this encrypted data directly, ensuring data privacy while extracting valuable insights—a crucial capability for sensitive sectors like health care, insurance, or finance.
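The core property, computing on data without decrypting it, can be shown with a deliberately toy, insecure sketch of an additively homomorphic scheme in the style of Paillier. The primes below are tiny and hardcoded purely for illustration; real deployments use vetted libraries and large keys.

```python
import math
import random

# Toy Paillier-style additively homomorphic encryption.
# WARNING: tiny hardcoded primes, illustration only -- never use in production.
p, q = 89, 97
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow(lam, -1, n)  # modular inverse; exists because gcd(lam, n) == 1 here

def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    # With generator g = n + 1, g**m mod n^2 simplifies to 1 + m*n.
    return ((1 + m * n) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = (encrypt(12) * encrypt(30)) % n2
print(decrypt(c_sum))  # prints 42
```

Multiplying the two ciphertexts yields an encryption of the sum, so an untrusted cloud could aggregate encrypted patient values without ever seeing them; the data holder alone decrypts the result.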

Looking Forward
When combined with the best that human cybersecurity experts have to offer, AI can be crucial for identifying existing threats while anticipating and mitigating future threats. Human-machine teaming will create a more proactive, adaptive, and secure digital environment. With attackers already integrating AI into their offensive operations, network defenders must be equally prepared to incorporate AI into their defensive strategies to maintain a resilient security posture.

As we anticipate the emergence of new and more sophisticated AI cybersecurity applications, the message is clear for policymakers, practitioners, and the general public alike: We must commit to balancing concerns about AI’s potential harms with open-minded enthusiasm for its capabilities. Policymakers play a key role in this process, as they are uniquely positioned to ensure AI regulation strikes a balance between promoting innovation and protecting against emerging security threats and other risks. This also means we should spend more time educating policy leaders about AI’s strengths and limitations as well as its opportunities and risks. By strategically embracing the many benefits and solutions AI brings to cybersecurity, we position ourselves not merely as responders, but as thoughtful and forward-leaning leaders charting the course toward a more secure and technologically advanced future.