Government should go slow when regulating AI
As the March 20 editorial “Who’s responsible when ChatGPT goes off the rails?” noted, “it was good that the internet could grow.” Amid the current panic, so much other commentary forgets all the incredible ways technology and online speech have grown and served users. It is remarkable that ChatGPT passed a bar exam and can write functional computer code.
To allow for continued growth, advocates should be wary of government efforts that might harm further innovation. The United States should welcome artificial intelligence’s potential to greatly reduce vehicle fatalities, more than 90 percent of which are caused by human error. AI can also predict the success of cancer treatments. The possibilities are endless.
Furthermore, various AI tools and large language models (LLMs) are already implementing protective measures for users. When I ask ChatGPT to write a lawsuit against Ash for detaining Pokémon, the model begins with “[Disclaimer: This is a fictional scenario and should not be taken as legal advice].” Programmers’ choice to include such disclaimers reminds us that regulation is not all or nothing: it can range from a complete absence of rules to the outright prohibition of an item or tool, with options such as disclosure in between.
If Congress wants to regulate AI or LLMs such as ChatGPT, its first priority should be allowing for the maximum level of experimentation and innovation, not ending the technology’s potential before it has even begun.
Shoshana Weissmann, Washington