Future of AI Innovation Act Promotes Responsible AI Innovation in the Federal Government and Private Sector
On April 18, 2024, a bipartisan group of senators introduced the Future of Artificial Intelligence (AI) Innovation Act, which aims to set the foundation for continued U.S. leadership in the development of AI and emerging technologies. The legislation responds to longstanding calls from researchers by incorporating key cybersecurity recommendations, such as the development of international standards, metrics, and AI testbeds; increased collaboration between the public and private sectors and with governments at home and abroad; and enhanced information sharing to drive secure AI research and development.
Alongside the prospective Framework for Mitigating Extreme AI Risks, Congress has been moving quickly to legislate AI, especially in the many areas where AI and cybersecurity intersect. After years of wrangling over AI legislation, lawmakers now recognize how important it is to include relevant stakeholders in policy implementation. This aligns with President Joe Biden’s October 2023 AI Executive Order, which stated the intention to pursue a multi-stakeholder approach to AI for society’s benefit. The Future of AI Innovation Act is a commendable step toward promoting secure AI development and cross-sector collaboration in the United States.
The legislation introduces a number of welcome developments:
- Formal establishment of the U.S. AI Safety Institute (Institute) at the National Institute of Standards and Technology (NIST) to develop AI standards that support national security, public safety, and individual rights. According to the bill, the Institute will research “system and model safety, validity and reliability, security, capabilities and limitations, explainability, interpretability, and privacy”; develop standards that incorporate secure development practices for both generative AI and foundation models; and develop and publish “cybersecurity tools, methodologies, best practices, [and] voluntary guidelines” to protect AI models from vulnerabilities and attacks. We encourage the Institute to balance the need for specificity with enough flexibility to accommodate the breakneck pace of AI development.
- Creation of AI testbed programs among the National Laboratories, NIST, the National Artificial Intelligence Research Resource pilot, the U.S. Department of Energy, and the private sector to develop security risk tools, metrics, and testing environments that companies can use to assess their systems’ capabilities and limitations. A welcome development, these testbeds can help identify potential autonomous offensive cyber capabilities; vulnerabilities within the AI ecosystem; chemical, biological, radiological, and nuclear threats; and critical infrastructure risks.
- Establishment of the Foundation Models Test Program by the Under Secretary of Commerce for Standards and Technology, acting through the Institute’s director and in coordination with the Secretary of Energy. This voluntary program enables vendors to test AI foundation models “across a range of modalities,” including text, images, audio, video, and software code. The tests aim to benchmark and improve the accuracy and efficiency of foundation models and to identify bias.
- Formation of the International AI Innovation and Standards Coalition by the Secretary of Commerce, the Secretary of State, and the director of the Office of Science and Technology Policy. This global initiative seeks to encourage international cooperation on AI innovation and to harmonize AI standards across nations. By promoting the global adoption of common AI safety practices and standards, the coalition strives to foster a cohesive approach to AI development.
- Initiation of bilateral and multilateral AI research collaborations directed by the National Science Foundation (NSF). These collaborations focus on advancing AI research and development internationally, including the sharing of data and expertise. By leveraging existing NSF programs like the National AI Research Institutes and Global Centers, they will enhance global scientific collaboration, driving coordinated innovation and secure AI development.
If implemented correctly, this bill has the potential both to improve the AI advancement ecosystem and to support the development of standards that strengthen AI cybersecurity, minimizing the risks associated with responsible AI failures and with vulnerabilities in AI infrastructure. Any step toward comprehensive AI security must align market forces with government incentives so that organizations resist the rush to market and instead deploy technology that is safe and secure.