AI Dialogue with China: Talk, But Don’t Sign Anything
Artificial intelligence (AI) has emerged as a potential agenda item for next week’s summit between President Trump and Chinese President Xi Jinping. A dialogue between the two leading powers locked in competition over AI development is welcome news and could have genuine value: continuous communication among key players is an important part of managing AI-associated risks as they emerge.
Yet extending dialogue into formal agreements between great powers has a fraught historical record. Productive dialogues can drift into binding constraints that undermine the parties willing to honor them while advantaging those that are not.
When Dialogue Becomes Constraint
History offers two distinct warnings about the limits of formal agreements between great powers on technological development. The first failure mode is a party that formally exits a treaty and builds to its own requirements while its former partners remain constrained. The second is a party that stays in the treaty but defects anyway, appearing compliant while continuing to develop in secret. Both failure modes share the same causes: verification is difficult and defection is cheap. AI presents both risks acutely.
The Washington Naval Treaty illustrates the first failure mode. After World War I, a period of global strategic competition in naval technology culminated in what seemed like a reasonable arrangement: in 1922, dialogue among the major naval powers produced a formal agreement, signed by the United States, Great Britain, Japan, France, and Italy, that imposed tonnage limits on battleship construction as a means of managing strategic competition. The treaty appeared to work for about a decade, with signatory nations building treaty-compliant warships and designing their fleets around treaty limits rather than strategic requirements. When Japan announced its withdrawal from the Washington Naval Treaty system in 1934 and, once the treaty lapsed, began unrestricted construction, the remaining signatories found that years of treaty-compliant planning had organized their navies around limitations their adversary no longer recognized. The treaty had not inhibited strategic competition; it had merely restructured the competition in a way that favored the party willing to defect from the arrangement.
The Biological Weapons Convention (BWC) of 1972, which sought to ban biological and toxin weapons globally, illustrates the second failure mode. Immediately after signing the BWC, the Soviet Union began secretly developing biological weapons on a massive scale; South Africa and Iraq were later revealed to have done the same. Unlike Japan’s formal exit from the naval treaty system, Soviet defection from the BWC was invisible. Nations that complied with the treaty constrained their biological weapons programs while the Soviet Union built one of the largest programs in history behind the shield of the very treaty meant to prevent it. The BWC failed for the same reason the Washington Naval Treaty did, but the outcome was worse because the compliant parties never saw the defection coming.
The takeaway from these examples is not that dialogue between global powers is ineffective. The lesson is that extending dialogue into formal constraints, in a domain where compliance is hard to verify and defection is hard to detect, creates a clear risk of asymmetric outcomes: the party that honors the agreement is constrained, while the party that defects gains a distinct edge.
The Risks of This AI Dialogue
AI development presents challenges similar to those faced by the signatories of the Washington Naval Treaty and the BWC. There is no equivalent of naval tonnage in the AI domain: model parameters cannot be counted from outside a lab, and even a verified count correlates only loosely with capability. A compliant AI research program is outwardly indistinguishable from one that violates the terms of a hypothetical international AI treaty.
The character of the dialogue matters as well. When the Biden administration launched formal AI discussions with China in 2023, Beijing sent non-technical representatives from its Ministry of Foreign Affairs. Substituting diplomats for engineers and scientists sharply limited the substance of the talks.
The Trump administration appears to be taking the same approach, with Treasury Secretary Scott Bessent leading the American side of the proposed discussions. The Treasury Department is the right venue for tariff negotiations and currency frameworks, but it has no business establishing technical norms around AI safety. The risk is that finance ministries will negotiate formal frameworks, binding commitments, and verifiable metrics precisely because those look like summit deliverables. The discussions must avoid these outcomes at all costs.
Setting the Right Target for Discussion
None of this is to say that dialogue is a poor choice for America. Continuous dialogue, voluntary norms, and coordination among nations can be productive, and none of them requires any party to constrain its progress while hoping a competitor reciprocates. The Cold War hotline between Moscow and Washington was valuable because it maintained communication between adversaries during crises, not because it produced arms control treaties.
The Trump-Xi discussions offer a similar opportunity: identify shared concerns about AI development, establish crisis-management channels, and build toward consensus on norms for AI-enabled military systems. None of these goals would constrain American AI development.
Participants must remain vigilant that these discussions do not drift toward enforceable limits on AI. If the United States volunteers to limit or control its AI firms, other nations can exploit that restraint to catch up to or surpass U.S. capabilities, with lasting consequences for American prosperity and national security.
The administration is correct to engage with China on AI, but it must understand the end goal of that engagement. Talking is good, but the Washington Naval Treaty and the BWC both started the same way.