Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence

Author

Adam Thierer
Resident Senior Fellow, Technology & Innovation

Getting the governance balance right—and ensuring that it remains flexible, responsive and pragmatic—is essential if the United States hopes to remain at the forefront of global AI innovation and competitiveness.

Executive Summary

Policy interest in artificial intelligence (AI) and algorithmic systems continues to expand. Regulatory proposals are multiplying rapidly as academics and policymakers consider ways to achieve “AI alignment”—that is, to make sure that algorithmic systems promote human values and well-being. The process of embedding and aligning ethics in AI design is not static; it is an ongoing, iterative process influenced by many factors and values. It is therefore crucial that we build resiliency into algorithmic systems. The goal should be algorithmic risk mitigation—not elimination, which would be unrealistic. As we undertake this process, there will be much trial and error in creating ethical guidelines and finding better ways of keeping these systems aligned with human values. As a result, one-size-fits-all, top-down (i.e., regulatory-driven) mandates are unlikely to be workable or effective.

This article summarizes how flexible, adaptive, bottom-up and less restrictive governance strategies can address algorithmic concerns and help ensure that AI innovation continues apace. Various organizations are already working to professionalize the practice of AI ethics through sophisticated best-practice frameworks, algorithmic audits and impact assessments. Multi-stakeholder efforts are helping to build consensus around these matters. These decentralized “soft-law” governance efforts complement and build on existing hard law. Ex-post enforcement of existing laws and court-based remedies will provide an important backstop when AI developers fail to live up to their claims or promises about safe, effective and fair algorithms. Existing consumer protection laws and agency product recall authority will play a particularly important role in this regard.

Government can play an important role as a facilitator of ongoing dialogue and multi-stakeholder negotiations to address problems as they arise. The National Telecommunications and Information Administration (NTIA) and the National Institute of Standards and Technology (NIST), which have already done crucial work in this regard, could form a standing AI working group that convenes these parties on an as-needed basis over time. Government actors can also facilitate digital literacy efforts and technology awareness-building, which can help lessen public fears about emerging algorithmic and robotic technologies.
