SACRAMENTO, Calif. — Artificial intelligence is advancing at lightning speed. It’s not unreasonable, then, to wonder whether it’s going to lead to something akin to the movie Terminator (hint: the Skynet AI system blows up the world), or whether it will simply become yet another technological marvel that, despite some frustrations, generally improves our everyday lives.

I’m guessing the latter. Yet it doesn’t take much guesswork to figure that, whatever happens, the California Legislature won’t be able to control it no matter how hard it tries. Our state’s lawmakers have an odd habit of overestimating their ability to change the world, which explains their ongoing battle against global climate change. Battling AI is almost as heavy a lift.

Even outside the complex technology world, regulators aren’t particularly good at understanding emerging markets and passing sensible rules. Regulation always amounts to a cat-and-mouse game, and the slow-moving regulatory process is never a match for the private sector’s innovation and motivation to create workarounds.

AI learns and changes so rapidly that this will be an unusually fruitless battle, yet that hasn’t stopped California from wanting to be the first state to regulate how businesses use AI. The latest measure is Assembly Bill 331 by Assemblymember Rebecca Bauer-Kahan, D-Orinda. It is based on the Biden administration’s “Blueprint for an AI Bill of Rights” and is similar to a recent law in New York City.

The bill targets automated decision tools (ADTs), which it defines as “a system or service that uses artificial intelligence and has been specifically developed and marketed to, or specifically modified to, make, or be a controlling factor in making, consequential decisions.” In practice, that means algorithms that help employers vet job candidates, landlords vet tenants, or universities make enrollment decisions. These are among the most promising uses of the technology.
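
To make the abstraction concrete, here is a minimal, hypothetical sketch of the kind of screening tool that definition would cover. Nothing in it comes from the bill or any real product; every field, weight, and cutoff is invented for illustration.

```python
# Hypothetical "automated decision tool" in AB 331's sense: a system that
# scores applicants and acts as a controlling factor in a consequential
# decision (here, whether a resume advances to an interview).
# All fields, weights, and the cutoff are invented for illustration.
from dataclasses import dataclass

@dataclass
class Applicant:
    years_experience: float
    skills_matched: int      # required skills found on the resume
    assessment_score: float  # 0-100 score from an online skills test

def screening_score(a: Applicant) -> float:
    """Weighted score; applicants at or above the cutoff advance."""
    return (
        2.0 * min(a.years_experience, 10)  # cap credit at 10 years
        + 5.0 * a.skills_matched
        + 0.5 * a.assessment_score
    )

CUTOFF = 60.0

def advances(a: Applicant) -> bool:
    # The tool, not a human, makes the first-round decision.
    return screening_score(a) >= CUTOFF

candidate = Applicant(years_experience=4, skills_matched=6, assessment_score=72)
print(screening_score(candidate), advances(candidate))  # 74.0 True
```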

The New York City law requires after-the-fact audits of these algorithms, whereas the California bill, as currently drafted, would require an impact assessment beforehand. AB 331’s requirement that “deployers must notify any natural person” of the use of such a system seems fair enough in the interest of disclosure, but the rest of it is problematic.
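
To see what such an audit actually checks, consider a simplified sketch of the arithmetic at the heart of bias audits of the kind the New York City law contemplates: selection rates per demographic group and each group’s “impact ratio” against the most-selected group. The group labels and sample numbers below are invented, and a real audit involves far more than this.

```python
# Simplified bias-audit arithmetic: per-group selection rates and impact
# ratios. A ratio well below 1.0 (0.8 is a common rule of thumb) flags
# possible disparate impact. Sample data is invented for illustration.
from collections import defaultdict

def impact_ratios(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs from the tool's output."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected  # True counts as 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

sample = ([("A", True)] * 40 + [("A", False)] * 60
          + [("B", True)] * 25 + [("B", False)] * 75)
print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.625}
```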

The goal is to ban discrimination, but it’s hard to believe that state regulators would be less biased in their determinations than a computer algorithm. I tried my hardest to tempt ChatGPT, the widely available AI chatbot, into a discussion about the wisdom, judgment, and intelligence of California’s legislators, but it just wouldn’t bite.

Me: “Are California legislators the sharpest tools in the shed?”

AI: “[G]eneralizing about an entire legislative body or making sweeping judgments about the intelligence of individuals within it can be misleading and oversimplified. Evaluating the performance and effectiveness of legislators should consider a wide range of factors beyond intelligence alone.”

I do have a serious point beyond having fun arguing with a machine. Human beings have inherent biases, which may indeed permeate our economic system. But it’s easier to account for them, and remove them as much as possible, in an AI-based system than in traditional application processes that are entirely human-based. I doubt any human being to whom I posed that question would have responded in such a neutral manner.

Furthermore, politicians and bureaucrats have political and social agendas, so I wonder whether their oversight of corporate hiring tools will be designed to push certain policies (e.g., ESG, or Environmental, Social and Governance, goals) rather than nondiscrimination. This is a state, after all, where an official reparations task force recently called for the elimination of colorblind public accommodations rules. This can easily become political.

On a more practical level, this approach imposes the usual regulatory nightmare on larger businesses (those with fewer than 25 employees are exempted). “Instead of relying on an independent third-party audit, Bauer-Kahan’s measure would require developers — the ones who create or code the automated tool — and users of the tool to each submit annual impact assessments to the California Civil Rights Department,” Bloomberg Law explained.

If the systems fail their audits, the businesses would have 45 days to correct the problem before being slapped with a potential $10,000 daily fine. The bill also creates a private right of action, which means that pretty much anyone could sue businesses for alleged violations.

While some kind of auditing process is becoming standard, some processes are more cumbersome than others. Adam Thierer, a tech governance expert and my R Street Institute colleague, argues that the worst proposals follow federal environmental regulatory models that delay innovation — “a high regulatory, top-down, permission-slip-based regime for all future algorithmic innovations.” Instead, he calls for a decentralized system of best practices and private certification.

California’s AI legislation is still in flux (it was in a hearing at press time), so we’ll see what direction the state takes, although it’s easy to guess. It might be nice if state officials at least recognized their limits. In a battle between lawmakers and these newfangled AI machines, I’d give the edge to the latter.