So, count me as grateful for those with both the imagination and technical chops to anticipate and address AI's security risks. But as Adam Thierer of the R Street Institute argues, securing our future doesn't mean shutting down innovation. Instead, it means fostering what he calls a "positive innovation culture": one that doesn't default to treating AI as dangerous until proven safe, but instead leaves room for creative problem-solvers to address harms when they are real and knowable.

And even when real harms do emerge, a positive innovation culture is still the best default for solving them. On the cutting edge of innovation, especially in a competitive landscape teeming with new entrants, solutions are more likely to come from decentralized, bottom-up experimentation than from precautionary, top-down mandates. Where they're permitted to do so, AI companies are already iterating on content policies, refining safeguards, and updating usage limitations to prevent misuse.