While artificial intelligence (AI) is not new, Google Bard, Microsoft Bing, ChatGPT and similar products have made the technology accessible and understandable to the average consumer. However, concerns over privacy and security have led to calls for a pause on AI development and an interest in heavy regulation. While these risks should be addressed, the overall benefits of AI, machine learning and large language models to cybersecurity and national security cannot be overlooked. Instead, policymakers should consider how the United States can fully leverage the technology in these spaces. These technologies have three direct applications: at the individual, system and national levels.

First, AI can benefit cyber defenders. In 2022, it took about 277 days on average to identify and contain a data breach, and breaches from some causes took more than 300 days to identify. Organizations that contained breaches in 200 days or less saved an average of $1.12 million, and those using AI and automation saved an average of $3 million. Speed and financial savings are important, but some studies also show AI improving detection rates. AI can serve as a key part of more efficient and timely threat detection by automating tasks a human analyst would otherwise have to perform, synthesizing larger and more complex data sets, and potentially better enabling less-skilled practitioners.
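To illustrate the kind of automated triage this enables, the sketch below shows unsupervised anomaly detection over connection-log features using scikit-learn. It is a minimal, illustrative example only; the feature names, data and thresholds are assumptions for demonstration, not a description of any specific product.

```python
# Minimal sketch: flag anomalous network connections for analyst review.
# Assumes scikit-learn and NumPy; the features and data are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy connection-log features: [bytes_sent, bytes_received, duration_seconds]
baseline = rng.normal(loc=[5_000, 20_000, 30],
                      scale=[1_000, 5_000, 10],
                      size=(1_000, 3))
suspicious = np.array([
    [900_000, 1_000, 2],   # unusually large outbound transfer
    [10, 5, 6_000],        # long, nearly silent session
])

# Fit an unsupervised model on "normal" traffic, then score new events.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

events = np.vstack([baseline[:5], suspicious])
labels = model.predict(events)  # -1 marks an outlier worth escalating

for features, label in zip(events, labels):
    status = "ESCALATE" if label == -1 else "ok"
    print(status, features)
```

In practice, a human analyst would still review whatever such a model escalates; the value is in narrowing millions of events down to a handful worth attention.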

Cybersecurity-specific products using large language models have recently emerged to aid defenders, and automatic threat assessments are now a reality. Other advancements under development might enable the analysis of potential malware in seconds. There are drawbacks to AI’s use in cybersecurity, including the quality of the data available to learn from, but as with most aspects of security, no one solution should be relied upon entirely. AI is not the sole solution to cybersecurity, but it can play a part.

Second, AI can benefit traditional systems essential to national security. A recent Senate Armed Services Committee hearing explored how AI and machine learning can improve Department of Defense operations. Notably, the hearing highlighted that the United States has “trillions of dollars of major weapons systems that are profoundly vulnerable to cyberattack.” Unfortunately, weapon system cyber vulnerabilities are not a new revelation. Testing has shown that attackers could take control of systems using relatively simple tools and techniques and operate largely undetected.

However, the notion is gaining traction that these threats cannot be addressed without AI and the benefits it provides. One of the clearest applications is detecting anomalies and helping determine what constitutes a cyberattack. Even if our weapon systems were uniformly advanced from a security perspective, AI would offer a benefit; when they are not, it becomes critical. The technology also offers an operational advantage to the military, as Army Vantage has shown.

Third, AI can improve national security. The technology extends well beyond the United States, and our adversaries are intent on maximizing its capabilities. As the Annual Threat Assessment of the Intelligence Community noted, “China is rapidly expanding and improving its artificial intelligence (AI) and big data analytics capabilities…” and China has directly asserted its desire to be the primary AI innovation center by 2030. Combine this with the fact that China engages in widespread data collection and is not constrained by the rule of law, and it has a built-in advantage. China is certainly not going to abide by a pause in AI development or respect best practices developed by the United States or its allies.

This does not mean the United States should advance AI without any guardrails, but failing to treat it as a strategic priority and fight to stay ahead poses severe risks to the nation. It also means the government and private sector alike must ensure AI is as secure as it can be, because our adversaries will seek to exploit any vulnerabilities. Recent investments announced by the White House are helpful, as is existing work by the National Institute of Standards and Technology (NIST) on its AI Risk Management Framework, which builds upon its cybersecurity and privacy frameworks. Likewise, plans for hackers to publicly evaluate generative AI systems at DEF CON 2023 are positive and bring to mind past examples like “Hack the Pentagon,” in which the government engaged with hackers to discover vulnerabilities.

While the negative and concerning aspects of AI have received a lot of attention, its positive and important applications cannot be ignored, especially as Congress, the White House and regulators forge a path forward. Failure to recognize AI’s cybersecurity and national security benefits risks putting the United States behind its adversaries or missing a cyber vulnerability.
