History is riddled with examples of people dismissing emerging technology, so lawmakers’ recent hysteria over artificial intelligence should be no surprise. More than two millennia ago, in Plato’s “Phaedrus,” the written word was a nascent technology, and Socrates warned that it would “produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory.”

Today, the written word has improved lives, expanding our ability to communicate and making knowledge more accessible. Similarly, AI and its counterpart, machine learning (ML), can raise our standard of living. This is already happening in medicine. AI has helped discover new antibiotics to defeat superbugs, assisted in quickly detecting antibiotic resistance, shown the potential to predict who will develop cancer, and even restored mobility to the paralyzed. Relatedly, AI could free physicians from the drudgery of administrative paperwork and allow them to focus on patients.

The medical innovation already emerging because of AI and ML is just the tip of the iceberg. Many other industries, including energy, retail, finance and automotive, are also reaping benefits. In the automotive industry, AI powers self-driving cars, real-time driver risk evaluation and predictive maintenance tools. All of these applications have the potential not only to improve our lives but also to make them safer. We risk limiting these advancements if we take a heavy-handed approach to AI regulation like the one adopted by the European Union.

While the United States is ahead of other nations in this field, state and federal legislation that would stifle the growth of AI advances and applications is proliferating. This year, Connecticut passed a measure regarding public agencies and their use of AI. The act directs an inventory of all systems that use it and aims to ensure that these systems don’t result in a disparate impact or discrimination. This statute also affects private entities that do business with the state.

The General Court of Massachusetts is considering legislation regulating generative AI tools like ChatGPT. In New York City, a new law restricts the use of algorithms in hiring by requiring employers to disclose when the technology is used. While these measures seem fairly innocuous, they are redundant, undermine our leadership in this field, and could set us back in medicine, self-driving cars and fraud prevention.

Policymakers should be nimble and leverage existing authority rather than casting a wide net with new mandates. We must avoid measures focused on system design and inputs, which risk stifling discovery through burdensome bureaucratic and compliance requirements.

As automation develops, we must resist the precautionary reflex that so often greets emerging technology. With so much positive potential, it’s vital that we adopt flexible, pro-innovation governance strategies for AI, such as enforcing relevant existing laws and using best-practice frameworks. AI, ML and algorithms have already demonstrated how they can enhance our lives, and we should embrace that rather than fear it.

AI may seem complex, but it is essentially a machine performing tasks that mimic human reasoning, and ML is simply a computer adapting and adjusting an algorithm without human assistance. Algorithms, the mathematical models that act as a “set of instructions,” are fundamental to machine learning.
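To make that definition concrete, here is a minimal, hypothetical sketch of machine learning: a few lines of Python in which a program repeatedly adjusts its own parameter to fit data, with no human tuning the answer. The data, starting guess and learning rate are illustrative assumptions, not drawn from any real system.

```python
# A program "learns" the slope of y = 2x by adjusting a single
# parameter w on its own -- the essence of machine learning.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # illustrative (x, y) pairs

w = 0.0              # initial guess for the slope
learning_rate = 0.05

for _ in range(200):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # The adjustment step: the algorithm updates itself.
    w -= learning_rate * grad

print(round(w, 2))  # converges to 2.0, the true slope
```

No person told the program the answer was 2; it arrived there by following its instructions, which is all the “learning” in machine learning amounts to.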

News reports suggest that AI will lead to our demise if we don’t impose burdensome regulations. Based on this portrayal, one would think that AI and ML development currently occurs without any guidance, oversight or government regulation, but that is not the case.

The National Institute of Standards and Technology has released its “AI Risk Management Framework,” which offers voluntary guidance for managing risks in the design and use of AI systems. Additionally, the Food and Drug Administration has used its regulatory authority for years to approve AI- and ML-powered medical devices. Many other agencies, including the National Highway Traffic Safety Administration, also impose rules on these systems.

Plus, leading AI companies have created standards for self-regulation, and common-law doctrines such as product liability already apply to these systems. Yet none of this has stemmed the furor or stopped legislators from peddling a dystopian future if AI isn’t reined in. Like the ancient concerns about the written word, this apprehension is likely to be eclipsed by AI’s positive contributions to society.

Lawmakers should avoid snuffing out a revolutionary technology before it can fully benefit society.