…There are, I think, plenty of reasons to be skeptical of new AI regulation. For example, various technologists have pushed back on the idea that the technology, even in a much more advanced form, will have the power to pose an existential risk to humanity. (Many have rightly noted, moreover, that current technology really isn’t “AI” at all.) And, yes, malicious actors may abuse the technology, but there are obvious and non-obvious ways that others can work to counter such baddies, including via the same technology (sorta like how your spam filter battles spambots). For those interested, Adam Thierer of the R Street Institute has a great, frequently updated primer on much of this discourse…

Thierer adds, moreover, that the U.S. tech sector owes its world-beating status to a “permissionless innovation” approach that other nations, particularly in Europe, have eschewed to their detriment. And he’s right to note that countries like China aren’t going to slow down their AI efforts just because we do. Other observers, such as former FTC official and current “innovation evangelist” Neil Chilson, have pointed out that the same people who wanted a new “digital regulator” two or three years ago are doing the same dance today for AI, even pushing the same legislation. (Same goes for Section 230 and content moderation.) Sure looks like they’re just groping for problems that can justify pre-existing government solutions…