Don’t let fear of artificial intelligence drive policy
Labor disputes within the film industry have become so common that they border on cliché, but the most recent—and still ongoing—strikes were unique. They marked the first time the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists were on strike simultaneously since 1960, and their complaints revolved around a shared theme: artificial intelligence, or AI.
Among the many issues at hand, “Both writers and actors are fighting for contracts that prevent an AI from replacing them at their jobs, whether it’s writing scripts or appearing as a background actor,” read a July 18 Verge article.
It’s not just actors and writers who are worried about AI replacing them in the workplace. Some tech workers, journalists, legal professionals and others are also concerned. Even professors fear that AI will give students the ability to conduct research and draft essays without putting in the work, but I’d be remiss if I didn’t note that current AI-generated essays aren’t always particularly great or entirely accurate.
Now, more than just unions are targeting AI. Lawmakers are mulling reforms that would heavily regulate these technological advances, and even Elon Musk has joined the fray. In an open letter he signed alongside other tech figures, the signatories wrote, “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
While it is important to understand the potential implications of new technologies and settle on commonsense safeguards, should we really be so suspicious of the emerging technology? To help answer that question, I went to ChatGPT, which is “an AI-powered language model developed by OpenAI, capable of generating human-like text based on context and past conversations,” according to its website. It is a recently launched technology that helped bring AI debates to the forefront.
Thinking of the writers’ strike, I prompted ChatGPT, “In 3 sentences, draft a concept for an Indiana Jones sequel that’s actually good.” Its response was lightning fast. “In ‘Indiana Jones and the Lost City of Legends,’ an ancient artifact emerges, leading Indy on a quest to uncover the mythical city of Atlantis. Alongside a brilliant young archaeologist and a wise historian, they must race against a shadowy organization to unlock the city’s secrets and prevent a cataclysmic event that could reshape the world.”
So far so good. ChatGPT continued, “The film blends thrilling action, historical mysteries, and character-driven storytelling, honoring the franchise’s legacy while introducing fresh elements to captivate both long-time fans and new audiences.”
Other than the concept’s last sentence being little more than needless blather, it is an intriguing movie idea. Indiana Jones, Atlantis and a shadowy organization? It has to be better than “Indiana Jones and the Kingdom of the Crystal Skull,” but is this ChatGPT movie concept so exemplary that it will put writers out of work? I don’t think so, but keep in mind that AI is still in its infancy, although it is rapidly advancing.
If unions work with employers to prohibit the use of AI, then that’s their right, but Americans need to be equally suspicious of burdensome government regulation that could stifle AI development. Senate Majority Leader Chuck Schumer, D-NY, is leading the charge to enact a framework to regulate AI, but Congress’ urge to do something quickly might backfire.
Rushing could undermine Congress’ ability to act deliberatively and effectively, and it might hamper AI development, allowing other countries to surpass the U.S. as a technological leader—a distinct possibility. I’d wager that most members of Congress barely contemplated AI before this year, meaning there’s a steep learning curve.
Keep in mind that we have lived with forms of AI for years. Google Maps, troubleshooting chatbots and even text tools—like the sometimes annoying autocorrect feature on our phones—are all forms of AI. They are far from perfect, but each provides a benefit. Other forms promise to do much more.
AI can help humans streamline their work, aid in diagnosing illness, and take on monotonous 24/7 tasks and more dangerous activities. “For instance, [r]obots are able to undertake hazardous tasks such as coal mining, sea exploration, help in rescue operations during natural disasters, and so on,” according to Forbes.
Beyond that, AI can help reduce human fallibility and save lives. One example is autonomous vehicles, which, once perfected, could reduce accidents and incidents of drunk driving.
While the writers and actors seem to have joined forces with the Neo-Luddite movement, which unyieldingly opposes many forms of new technology, it is important to remember that AI is much like any other technological advancement. It has the potential to greatly benefit society. We just need to ensure that government regulation promotes safe and productive development.