Hello, and welcome to this week’s edition of The Future In Five Questions. At yesterday’s AR/VR Conference in Washington, I happened to notice that Adam Thierer, an author and analyst at the free-market R Street Institute, was speaking on a late-afternoon panel about generative AI and the metaverse, so I thought I would catch up with him. Adam was one of our first interviewees for Digital Future Daily for a little news item that happened to pop up in April 2022 about Elon Musk potentially buying Twitter.

Since then, Adam has been hard at work tracking the rapidly growing thicket of legislative proposals around AI, including pulling double duty yesterday on a Brookings Institution panel about “frontier AI regulation” featuring U.S. Rep. Ted Lieu (D-Calif.). We talked about the unusual partisan alliances the AI age has already inspired, the death of the drone dream, and what he thinks the government can do to better educate the public on tech. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

I’m currently exploring neurotechnologies, and trying to wrap my head (excuse the pun) around how those systems could profoundly change mental wellbeing and health. There is an enormously complicated learning curve, both technologically and from a regulatory standpoint, and it’s an evolving one, but there are a lot of scholars interested in how they might change and improve human wellbeing.

Debates about human enhancement have been with us for a long time. One of my favorite stories is Plato’s “Phaedrus,” where he has the god Theuth meeting King Thamus, and the god is going to give humans the gift of the written word and the king shoots back, “No, you shouldn’t do that, because then they’ll lose the cognitive capability to remember long folk tales and tell them around the campfire.” They both had a point: the god was correct that it would have benefits, and the king was right in saying we’re gonna lose our ability to remember long passages. Using information technologies to expand or enhance human capabilities will always inspire a raging debate, but it’s one we should be open to having before shutting it down based on risk.

What’s a technology that you think is overhyped?

I’m a little bit worried that, either because of policy or the technology just not panning out, the commercial drone revolution is not going to come about the way I foolishly predicted.

I figured that when Bezos made the big investment and bought Whole Foods, it was going to become a landing and launching pad for his fleet of drones. A lot of the public response is just NIMBYism [Not In My Backyard], local communities not wanting any part of having drones in the skies. But another big part of it is technological: the challenges of creating efficient and widespread drone service. They’ve been relegated to fun and games, where you see them at Brookstone in the mall and at fireworks shows, and I think that’s a real shame.

What book most shaped your conception of the future?

Virginia Postrel’s “The Future and Its Enemies.”

She almost perfectly laid out what battles over the future of technological progress would look like. She identified two camps of thinking about the future, “dynamists” and “stasists.” The stasis mindset defends the status quo, and values the present, or a particular potential future, and is willing to utilize certain legal or social instruments to try to hold the status quo in place. Whereas the dynamists are willing to embrace an uncertain, messy future where there are a lot of unknowns.

This is now playing out in the AI wars, in a huge way. At the Brookings event I was just debating people who were very concerned about what the future might hold, but unable to show conclusively how their fears might happen. They were saying, based on a hypothetical, worst-case scenario, we should freeze progress in certain ways, or at least regulate it very aggressively. I’m more of the mind that we should take every day as it comes and allow trial and error to work its magic. This book laid all this out, at a time when the internet was just being born.

What could government be doing regarding technology that it isn’t?

There’s always a need for more technological literacy. Whether it’s debates about child safety, or AI policy, or advanced medical devices, government can inform the public and consumers about risk trade-offs. It’s what’s called in the literature “risk communication.”

Technology is evolving very rapidly, sometimes exponentially, and public policy continues to evolve incrementally at best. There have got to be some second-best alternatives put on the table that are doable. Education and literacy approaches are not regulation, but they are a gap filler until you get regulation when it’s needed.

What has surprised you the most this year?

The rapidity with which we have moved into a full-blown war on computation and computing is astonishing to me. We have four hearings this week alone, plus an “AI Insight Forum,” whatever that is. We’re talking about pretty sweeping controls that I thought the United States put to bed during the late nineties’ “Crypto Wars.”

It’s not just people who are more market-oriented like myself who are worried about this. A ton of people in the open source world are terrified that they’re the canary in the coal mine. We have major proposals on the table for sweeping licensing regimes, massive restrictions on data centers and chips, a new regulatory oversight agency, and a potentially international one on top of that. At the more extreme end of things, you have proposals for widespread surveillance of these technologies.

Nine months ago, I don’t remember anybody proposing anything like this, and that’s astonishing to me.