Policy Studies | Technology and Innovation

Existential Risks and Global Governance Issues Around AI and Robotics

Author

Adam Thierer
Resident Senior Fellow, Technology & Innovation

Key Points

Proposals to impose global control of AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are largely futile because many nations will not agree to them. No major global power is going to preemptively tie its hands by agreeing not to develop its algorithmic capabilities while adversaries work, overtly or covertly, to advance their own.

As with nuclear and chemical weapons in the past, treaties, accords, sanctions, and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk can sometimes give rise to another, including war.

Continuous communication, coordination, and cooperation among countries, developers, professional bodies, and other stakeholders will be essential in heading off risks as they develop and in creating or reinforcing ethical norms and expectations about acceptable uses of algorithmic technologies. Many different nongovernmental international bodies and multinational actors can play an important role as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns.

Executive Summary

There are growing concerns about how lethal autonomous weapons systems, artificial general intelligence (or “superintelligence”), or “killer robots” might give rise to new global existential risks. The most important strategy for addressing such risks is continuous communication and coordination among countries, developers, professional bodies, and other stakeholders.

Although global agreements and accords can help address some malicious uses of artificial intelligence (AI) or robotics, proposals calling for control through a global regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are also futile because many nations would never agree to forgo developing algorithmic capabilities while adversaries advance their own. The U.S. government should therefore continue to work with other nations to address threatening uses of algorithmic or robotic technologies while also taking steps to ensure that it possesses the same technological capabilities as adversaries or rogue nonstate actors.

Many different nongovernmental international bodies and multinational actors can play an important role as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns. Soft law (i.e., informal rules, norms, and agreements) will also help address AI risks. Professional institutions and nongovernmental bodies have developed important ethical norms and expectations about acceptable uses of algorithmic technologies, and these groups are also essential in highlighting algorithmic risks and in ongoing efforts to communicate and coordinate global steps to address them.