This analysis is based on breaking news and will be updated.

R Street recently explained the functional problems with S.1993, the bill proposed by Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) to remove Section 230 protection from artificial intelligence (AI).

Below are 10 examples of how this legislation would affect America’s favorite tech tools. In each scenario, the user should absolutely be liable; but under S.1993, it is the company that would face suit, so AI’s potential would wither and its benefits would be vastly restricted. Any video, photo or text edit would expose the company to liability in court.

1. Using ChatGPT to edit a letter containing unlawful content

Say an individual writes a threatening letter that falls outside First Amendment protections. They may then instruct ChatGPT to edit the letter, describing it as “parody” in order to circumvent safety protections. If ChatGPT complies and the letter reaches its intended recipient, that person could sue OpenAI, ChatGPT’s maker.

2. Using Adobe Photoshop to apply Generative Fill to an image used for blackmail

Someone might have an image depicting a man kissing a woman who is not his wife, and intend to mail a copy to the man alongside a demand for money to keep his secret. If they use Photoshop’s Generative Fill feature on the image, Adobe would lose Section 230 protection for it.

3. Using Grammarly to rewrite user-created libel against Taylor Swift

Grammarly now offers AI features. If a person wrote an article falsely stating that Taylor Swift cheated on Travis Kelce and had Grammarly’s AI edit it, Swift could sue Grammarly.

4. Using Reclaim AI to move a terrorist group meeting on a calendar

Reclaim AI helps users reorganize tasks and meetings. If a terrorist group uses the technology to update its calendar, couching entries in vague terms to conceal what it is actually doing, Reclaim AI would be liable.

5. Using GitHub’s Copilot to create code for unlawful purposes

Perhaps someone requests coding help from Copilot in order to hack another entity or deploy ransomware against it. Even if it’s unclear to the AI (or to any sensible onlooker) what the code will be used for, GitHub would still be liable.

6. Using Google Gemini to correct math and grammar in fraudulent tax documents

If an individual uploads tax information intended to deceive the Internal Revenue Service and uses Gemini to review and edit it, Google would be liable for the tax fraud even if it didn’t know the information was inaccurate.

7. Using Wondershare Filmora to edit a video containing false advertising

Say a cheese company makes video ads claiming that its products cure cancer. If the company uses Filmora to edit these ads, Wondershare would be liable for their content.

8. Using Vimeo to create a training video for terrorists

If an individual submits content to Vimeo’s AI to help create a video, Vimeo may not know what the video will be used for. If it’s used to train terrorists (or for some other nefarious purpose), Vimeo would be liable.

9. Using Voicemod to mask the voice of someone making threatening phone calls

If a person creates a recording of themselves saying something threatening, uses Voicemod to mask their voice and then calls someone using that audio file, Voicemod would be liable.

10. Using smartphones to take photos

Smartphone cameras increasingly rely on AI not only to capture what we see, but also to reconstruct and enhance it. Because AI is built into the very core of the camera’s function, smartphone manufacturers could be liable for every picture taken with their devices.

All of this shows how even small and seemingly innocent uses of AI could implicate companies in lawsuits. Because it’s impossible to know whether content will be used in illegal ways, it’s unclear how these companies could comply with the law without removing all AI features from their products. The resulting deluge of lawsuits could bring AI development in the United States to a grinding halt.

Lawmakers like Sen. Hawley, who have expressed concerns about the potential use of AI to censor conservatives, should be especially skeptical of this legislation. Without Section 230 protections for AI content, companies may be forced to preemptively review any content created with their products and censor anything controversial or potentially illegal to avoid liability. Therefore, S.1993 could lead to the very censorship that Sen. Hawley fears.