On Sept. 29, 2024, Gov. Gavin Newsom vetoed SB 1047, California’s attempt at a sweeping frontier AI safety bill. His veto message highlighted the problems with such a maximalist regulatory approach. Regulation of AI, Newsom argued, must be grounded in “empirical evidence and science,” and blunt mandates that apply “stringent standards to even the most basic functions” of AI systems, without regard to deployment context or risk, are the wrong approach. Even as California continued to pursue its own regulatory regime, Newsom seemed to understand that any such regime must rest on a solid analytical foundation if the state is to export innovation rather than merely govern it.

Just 18 months later, Newsom signed an executive order (EO) that applies stringent standards and lacks the analytical foundation for which he had previously advocated.

Executive Order N-5-26 directs the California Department of General Services and Department of Technology to develop procurement certifications requiring artificial intelligence (AI) vendors seeking state contracts to attest to their policies on loosely defined terms such as “harmful bias,” “human autonomy,” “manipulation of information,” and protection against “violation of civil rights and civil liberties.” The EO defines none of these terms, offers no empirical framework for evaluating them, sets no threshold for compliance, and provides no appeals process for companies that don’t make the cut. What it does provide is vast bureaucratic discretion for unelected state agency officials to decide which answers to deeply contested social and political questions are appropriate and which merit losing access to the world’s fourth-largest economy.

Essentially, Newsom implemented the exact framework he decried in 2024.

His SB 1047 veto message was clear about the bill’s central flaw: Applying uniform, one-size-fits-all requirements to AI systems, without considering whether they are deployed in high-risk environments, process sensitive data, or inform critical decisions, is a flawed approach to regulation. Context is critical to evaluating the harm profile of an AI system. Applying the same criteria across all AI systems was too blunt a policy tool in 2024; yet in 2026, EO N-5-26 does exactly that. Holding an AI tool that summarizes state agency meeting notes to the same standards as one used for predictive policing defies the governor’s own logic.

Another meaningful difference between SB 1047 and EO N-5-26 is that only one went through the legislative process. SB 1047 followed a politically accountable path: committee hearings, floor votes, public testimony, and multiple rounds of revisions shaped by industry and civil society input. EO N-5-26, issued unilaterally, lets state agencies implement their own certification standards without the benefit of public deliberation. Its ambiguous terms will be defined by administrative interpretation, in the complete absence of legislative guidance. If Newsom had issues with the policymaking process behind SB 1047, it is unclear how he could be satisfied with his own executive order.

One provision merits particular scrutiny: Section 2, perhaps the most consequential part of the order. It directs California’s chief information security officer (CISO) to independently review federal supply-chain risk designations and bypass any deemed “improper.” While the federal government has recently used that designation improperly, empowering a state technology official to second-guess national security determinations risks fragmenting procurement and further degrading a legitimate national security tool. Moreover, because California’s CISO lacks access to the classified intelligence on which supply-chain risk designations rest, it is unclear on what basis the office would judge a designation “improper.” State-level workarounds are not a remedy for the misuse of security authorities at the federal level.

The press release accompanying the EO frames it as a response to the Trump administration’s federal AI efforts. But state procurement policy should not be driven by who occupies the White House. Moreover, if the certifications outlined in the EO were defensible, they would withstand legislative deliberation and stand on their empirical foundations, the standard Newsom himself set in 2024. That they were instead imposed by executive fiat, with undefined standards and partisan branding, suggests they cannot.

Newsom was correct 18 months ago: AI governance should not substitute bureaucratic discretion for clear legal standards, apply context-free mandates, or bypass rigorous empirical analysis. His own veto message made the case better than I ever could, and I would encourage him to reread it.

Our Technology and Innovation program focuses on fostering technological innovation while curbing regulatory impediments that stifle free speech, individual liberty, and economic progress.