SACRAMENTO — I still chuckle about a legislative hearing from a few years ago where state lawmakers ultimately approved new restrictions and taxes on electronic vaping devices. They were voting on complex regulations covering several different products but clearly didn’t know much about any of them. When a witness passed around a vape pod, the legislators examined it as if viewing an artifact that had fallen to Earth from Mars.

That’s my go-to example of the legislative and regulatory process, in which officials make high-impact decisions about technologies they generally don’t understand. Sometimes they reveal just how little they know, and it can be rather funny. In 2006, U.S. Sen. Ted Stevens (R-Alaska) described the internet as a “series of tubes.” Then there was the time then–Vice President Joe Biden asked an aide for “the Web site number.”

We all make gaffes. Politicians aren’t expected to be techies, which is why they rely on lobbyists and staffers to draft their legislation. Nevertheless, it might help if elected and regulatory officials showed at least a modicum of humility when trying to regulate any business — especially those that specialize in complex technological systems requiring real expertise to understand.

That brings me to the latest efforts by the administration and state governments to regulate the internet in the name of protecting us from some inexplicable danger. This week, the president issued an executive order that seeks “to reduce the risks that artificial intelligence (AI) poses to consumers, workers, minority groups and national security,” Reuters reported. Biden said, “To realize the promise of AI and avoid the risk, we need to govern this technology.”

Maybe the president has learned a few things about URLs since his gaffe about website “numbers,” but I still don’t trust him or his administration to “govern” any technology. As the Washington Post’s Josh Tyrangiel explained, the executive order is sweeping in scope and length, clocking in at 20,000 words: “[I]t’s an epic work of bureaucratic pointillism, giving nearly all of the government’s 15 executive departments a variety of AI responsibilities and reports.”

It’s bad enough when any one agency gets ahold of a regulation — imagine what will happen when 15 departments get their claws into one. Tyrangiel gushes that, despite the broad authority, the EO empowers Commerce Secretary Gina Raimondo to oversee this Byzantine work. Raimondo is smart, and I previously applauded her efforts as Rhode Island governor to reform that state’s pension system. But even the smartest bureaucrat can’t govern a technology.

This is hubris. Consider this one section from the AI EO: “In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.” None of us have any idea how this new, promising (and a bit scary, too) technology will develop, but the feds and unions already are trying to get their paws on it. I doubt they understand it enough to protect “job quality.” It doesn’t help that Raimondo already is vowing to hold accountable the industries that have agreed to work with her.

The EO will stifle innovation. “Newcomer startups may not have the capital required to ‘meet extensive testing and regulatory requirements like the AI giants can,’” Evan Reiser, cofounder of cybersecurity software firm Abnormal Security, explained to Forbes magazine. “Many of them are currently building AI models and tools using open-source models, which tend to be cheaper to use and more flexible to customize, as their foundation,” Forbes added.

Anyway, it’s always a bad idea to give the federal government — and 15 departments within it — carte blanche power over any industry. One of the odd things about AI is that no one really knows exactly how it produces its results, as the algorithms are something of a black box. The feds can’t even competently regulate straightforward business models. And shouldn’t the administration, you know, support legislation rather than issue an edict? Rest assured, none of this will protect the public.

At the state level, we’re seeing a rash of technology legislation that will be equally ineffective at protecting “the children.” Utah sparked this Luddite-like rush when, in 2021, it passed the Utah Social Media Regulation Act, which won’t go into effect until at least five other state legislatures pass substantially similar laws. Basically, the measure tries to protect kids from online smut by requiring mobile devices sold in the state to include device filters.

There are myriad problems here: It’s costly and difficult for manufacturers to build these filters into their operating systems. The filters must block content that meets each state’s specific definition of obscenity, and some of those laws are rather vague and can easily sweep in material that an ordinary person wouldn’t consider obscene. These laws also create a private right of action — similar to the laws that have generated so much lawsuit abuse here in California.

Device-filter laws also might be unconstitutional. The U.S. Supreme Court struck down two similarly designed federal “decency” laws because “such a government mandate was an undue restriction upon adults’ access to protected speech, in large part because commercially available content filters ‘are less restrictive’ and ‘may well be more effective’ than the law,” wrote my R Street Institute colleague Josh Withrow, a technology expert.

Note that private technological solutions are probably more effective than any public law, per the court. There’s also an age-old solution to such matters: Parents might want to take responsibility for their own families and oversee what their kids are viewing. Likewise, the states and feds ought to let new industries develop and stop meddling in technologies they don’t understand.