The White House recently announced its sweeping executive order on artificial intelligence, greatly expanding the government’s oversight of AI development. While some have celebrated the Biden administration’s efforts, the growing flurry of heavy-handed AI regulatory proposals has the potential to adversely disrupt the AI marketplace and undermine US global competitiveness.

Regulatory advocacy organizations have called on federal and state policymakers to pass precautionary mandates on AI and machine learning technologies, citing a wide range of amorphous concerns, from privacy and safety to discrimination. And though some of those concerns—often consolidated under the nebulous term “algorithmic fairness”—should be seriously examined, inserting government into a nascent and rapidly evolving industry will benefit no one.

In less than a decade, AI has evolved into a $100 billion industry. By 2032, the generative AI market is projected to explode to $1.3 trillion. AI’s far-reaching applications are evident in the countless sectors it has already begun to transform. In medicine, the applications of AI are potentially life-saving, from dramatically increasing access to health care to accelerating medical breakthroughs.

Yet frantic calls from AI regulatory advocates have pressured lawmakers on both sides of the aisle to introduce preemptive policies that threaten to derail that progress. The Biden administration’s comprehensive executive order follows countless proposals and policy statements from every level of government, including the administration’s AI Bill of Rights announced last year.

The Federal Trade Commission has announced plans to regulate AI in response to concerns of discrimination and bias. In Congress, legislators are being pressured to introduce bills proposing comprehensive top-down AI regulatory frameworks.

On the state level, the list of legislation introduced concerning algorithms and AI grows by the day. Colorado, Missouri, Maryland, and Rhode Island are working to establish commissions tasked with reviewing AI policy concerns.

Washington, D.C., proposed a rule to hold developers accountable for biases in decision-making algorithms. Meanwhile, Washington state went as far as to propose an outright ban on the use of algorithmic systems in government.

The cumulative effect is a complex and often conflicting regulatory regime that accomplishes little more than punishing small developers with fewer resources. For example, look no further than the EU’s AI Act, certain provisions of which the EU estimates may cost developers “€193,000-330,000 upfront plus €71,400 yearly maintenance cost.”

But the problems with top-down precautionary mandates such as the EU’s model go beyond fines and other financial burdens. Excessive regulatory requirements to address ill-defined concerns and values create a difficult-to-navigate regulatory environment that is directly at odds with the permissionless innovation that has fueled the technological advancements of the 21st century. Instead, a decentralized approach, such as internal algorithmic audits and impact assessments carried out by private firms or other self-regulatory bodies, could more efficiently address concerns of bias and safety.

Anxieties concerning the dangers of AI aren’t all ill-conceived. In the hands of bad actors, AI can be exploited to the detriment of others. Depending on the use case, reasonable safety rules may be warranted.

AI that drives vehicles has a very different risk profile from AI that generates Instagram posts; there is no one-size-fits-all regulation. Companies developing AI and machine learning systems are often best suited to identify the risks of their particular applications, and they should take appropriate preventative measures to ensure their technologies aren’t being abused. Misguided efforts to preemptively impose obstructive mandates on AI threaten to put a rapidly innovating industry with countless societal benefits into a regulatory chokehold.

Overregulating AI in the name of “algorithmic fairness,” no matter how well-intentioned, will hamper the development of technologies with the proven potential to save lives and weaken the US economy, to the benefit only of international competitors.

Lawmakers should proceed cautiously before introducing burdensome regulations at the expense of US technological innovation. A light-touch approach is critical considering the vast potential AI promises.