Applying the Precautionary Principle to AI Will Kill Tech Progress
R Street Institute Technology and Innovation Fellow Adam Thierer notes that the proliferation of more than 500 state AI regulation bills, like the one in Hawaii, threatens to derail the AI revolution. He singles out California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act as especially egregious.
“This legislation would create a new Frontier Model Division within the California Department of Technology and grant it sweeping powers to regulate advanced AI systems,” Thierer explains. Among other things, the bill provides that if someone were to use an AI model for nefarious purposes, the developer of that model could be subject to criminal penalties. This is an absurd provision…
Instead of authorizing a new agency to enforce the stultifying precautionary principle, under which new AI technologies are automatically presumed guilty until proven innocent, Thierer recommends “a governance regime focused on outcomes and performance [that] treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm.” Just such a governance regime already exists: most of the activities to which AI will be applied are already covered by product liability law and other existing regulatory schemes. Proposed AI regulations are more likely to run amok than are new AI products and services.