DeepSeek’s cybersecurity failures expose a bigger risk. Here’s what we really should be watching.
The release of DeepSeek’s R1 model (DeepSeek-R1) on Jan. 20, 2025, dominated news headlines and sparked lively debates across the tech and policy worlds. Some are already referring to the release as AI’s “Sputnik” moment, calling for outright bans, a reassessment of our chip export control policies, and even the criminalization of model downloads. Others argue the United States should view this moment as a wake-up call and focus its efforts on advancing our ability to innovate and develop AI at home. As policymakers and technologists debated DeepSeek’s impact on America’s technological leadership, market competition, and innovation strategy, many cybersecurity researchers emphasized DeepSeek’s numerous security failures, from successful jailbreak attempts to leaked chat histories.
Yet, despite myriad concerns, leading technology companies, including Microsoft, Amazon Web Services, and Cerebras, were busy finding secure ways to integrate DeepSeek-R1 into their platforms, services, and ecosystems. Within days, each of these firms had done so. At first glance, this might seem like a contradiction: Microsoft, in particular, is the largest cybersecurity company in the United States. However, the rush to embrace DeepSeek-R1 highlights a reality that has been overlooked: Securing leadership in artificial intelligence (AI) is not just about whose models are the most accessible; it is also about who is the most secure, reliable, and trustworthy.
Instead of leaning into panic or reacting with restrictive policies, we should pay close attention to the following shifts that are already transforming the landscape of cybersecurity, innovation, and governance.
1. DeepSeek’s Rise Is Part of a Larger Trend, Not a New or Isolated Challenge
The ongoing attention surrounding DeepSeek-R1 may give the impression that it represents an unprecedented breakthrough; however, this is far from the first time an open-weight AI model has been touted as a game-changer upon release. In fact, DeepSeek itself is not new. The company was founded in 2023, and its earlier DeepSeek Coder series had already gained traction among AI developers in 2024. The latest model has simply accelerated that momentum, particularly because its reasoning capabilities appear to rival those of OpenAI’s o1 model while allegedly operating at a lower cost and on lower-end infrastructure.
But DeepSeek is only one player in the broader field of open-source AI development. Only days after the R1 release, Alibaba unveiled its Qwen 2.5-Max model, which reportedly already outperforms R1 on several benchmarks. Despite this, Qwen 2.5-Max has received far less media and policy scrutiny, even though it represents another major stride in China’s AI capabilities. Other major players, including France’s Mistral AI and America’s Meta, continue to advance open-weight AI models, further challenging the notion that proprietary AI development will always hold an advantage.
This shift carries important national security, innovation, and cybersecurity implications. First, if China and DeepSeek’s claims about the R1 model’s performance hold true, the AI arms race between China and the United States may be far tighter than many anticipated. This is a critical moment for us to define what it means to “win” and to establish clear measures of AI competitiveness. Second, the line between open-source and closed-source AI models is blurring quickly, complicating traditional cybersecurity frameworks that assume greater control equals greater resilience. This also means open-source AI is here to stay and holds the potential to surpass even proprietary models in the future.
Finally, policymakers should recognize that AI development and deployment do not operate in complete isolation. Unlike traditional software or hardware, AI models rely heavily on shared datasets, distributed infrastructure, and iterative improvements from a broad ecosystem of users, researchers, and developers. This makes controlling or restricting AI more complex than simply banning a mobile application like TikTok or regulating hardware imports. Once an open-weight model is released, it remains widely accessible—meaning any restrictions within the United States would have little impact on global use.
2. Friend or Foe? The DeepSeek Debate Risks Missing the Point: Open-Source AI Shouldn’t Be All or Nothing
Recent reports about DeepSeek’s grave cybersecurity failures have also reinforced ongoing concerns about the national security implications of open-source AI development and use within the United States. For example, researchers recently discovered a publicly exposed DeepSeek database that leaked chat histories, application programming interface keys, and back-end details. This security flaw was not the result of a sophisticated cyberattack but rather a basic security lapse that left sensitive information exposed, a sobering reminder that we cannot ignore cybersecurity fundamentals when developing and deploying AI. Compounding this vulnerability, DeepSeek’s guardrails have repeatedly collapsed when tested, allowing researchers to jailbreak the model with ease. These jailbreaks enabled researchers to extract malware scripts, phishing templates, keyloggers, and even instructions for incendiary devices. Unlike proprietary AI systems, which typically undergo extensive adversarial testing and security hardening, DeepSeek appears dangerously susceptible to cyberattacks.
Given all these cybersecurity flaws and China’s long history of conducting cyber espionage, intellectual property theft, and data exploitation operations against the United States, policymakers have every right to question whether regulatory action is needed to proactively mitigate security risks associated with DeepSeek-R1. One concern already articulated in the ongoing debates around DeepSeek’s impact is whether China could use its open-source AI models to collect data about American users. However, moving to ban open-source AI models like the R1 model would be shortsighted. Its open nature means that just as China could use the model to collect data on Americans, researchers and developers across the United States can just as easily deploy and study it, leveraging its capabilities to accelerate and scale AI innovation.
Microsoft’s decision to incorporate DeepSeek-R1 into Azure AI Foundry offers a blueprint for secure deployment. Rather than blocking it, Microsoft quickly brought R1 into a controlled, monitored environment, allowing users to work with the model without exposing sensitive data or expanding attack surfaces. A similar approach—running open-source AI models locally on air-gapped servers and enforcing encryption and access controls—can allow independent developers to study, customize, and leverage open-weight AI securely. Policymakers are right to be vigilant, but they should also recognize that overly restrictive bans on open-source AI models could hinder the United States’ ability to compete effectively in the ongoing AI arms race. For this reason, the United States should focus on building security frameworks and best practices that allow for safe use, just as Microsoft and other leading technology companies have started to do.
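For developers weighing this approach, the sketch below is a minimal illustration, not a definitive implementation: it loads an open-weight model from pre-staged local files with network access to the Hugging Face hub disabled, approximating air-gapped use. The file path is hypothetical, and a real deployment would layer on the encryption, access controls, and monitoring described above.

```python
# Minimal sketch of running an open-weight model fully offline.
# Assumes the checkpoint was already copied to local storage
# (the path below is illustrative, not a real location).
import os

# Tell the Hugging Face libraries never to reach the network,
# mimicking an air-gapped server.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/deepseek-r1-distill"  # pre-staged weights (hypothetical path)

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, local_files_only=True)

# Run a prompt entirely on local hardware; no data leaves the machine.
prompt = "Summarize the trade-offs of open-weight AI models."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are open, this kind of isolated setup lets researchers study and customize the model without sending any data back to its original developer.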
Steady Wins the Race—DeepSeek Should Reinforce Our Resolve to Be Leaders in AI
The United States has enjoyed an edge in AI innovation, but DeepSeek’s recent achievements serve as a reminder that our progress and leadership cannot be taken for granted. This moment should inspire action and collaboration, not panic or overcorrection that could undermine our strengths.
If there is a lesson to be learned from the DeepSeek-R1 release, it’s that AI leadership is not just about who achieves exceptional model performance and accessibility first; it’s also about who can remain reliable and trustworthy while deploying those models securely and strategically.
America’s edge in technology has always stemmed from its robust private sector; world-class research institutions; and open, dynamic innovation ecosystem. Reactionary policies that restrict AI development under the guise of security could ultimately do more harm than good, stifling domestic innovation while failing to curb global AI risks. Instead, the United States should use this moment to lead in AI and emerging technologies by investing in AI security research and development, establishing risk-based AI governance frameworks, and expanding data centers and energy grids to support the demands of AI at scale. By leveraging the strengths and strategies that have historically propelled its technological progress, the United States can remain a leader in AI and emerging technologies for decades to come.