February 1, 2024

Representative Darrell Issa
Chair
Subcommittee on Courts, Intellectual Property, and the Internet
Committee on the Judiciary
U.S. House of Representatives
2108 Rayburn House Office Building
Washington, DC 20515

Representative Hank Johnson
Ranking Member
Subcommittee on Courts, Intellectual Property, and the Internet
Committee on the Judiciary
U.S. House of Representatives
2240 Rayburn House Office Building
Washington, DC 20515

Dear Chair Issa and Ranking Member Johnson:

As the House Judiciary Subcommittee on Courts, Intellectual Property, and the Internet takes on the important question of how artificial intelligence will affect our lives, we, the undersigned civil society organizations, write to express our collective concern with overbroad attempts to address the potential harms of using generative artificial intelligence to produce material that appears to depict real performances but, in reality, does not. Concerning proposals currently before Congress include the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act, or “No AI FRAUD” Act,[1] and the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023, or “NO FAKES Act of 2023.”[2]

We all share the goal of maximizing AI’s benefits while minimizing harms to artists, creators, and the general public. Unfortunately, the No AI FRAUD Act and NO FAKES Act sail wide of the mark, sweeping in far too much legitimate speech, increasing the risk that performers and their heirs could lose control over their performing rights, and endangering wide swaths of non-commercial, First Amendment-protected activities. As the Committee considers these important issues, we encourage Members to examine ways that existing law can address potential harms and pursue narrowly targeted legislation as needed. 

Current law in many states already offers protection against the unauthorized use of people’s likenesses or voices for commercial purposes. Known as “right of publicity” causes of action, these laws generally give people the ability to control when and how their likeness can be used for profit, often where the likeness is valuable or reflects an investment by the individual. The No AI FRAUD Act and NO FAKES Act riff on this concept, purporting to give people an intellectual property right in their likenesses and voices in order to prevent their use in content they did not participate in creating. But, in execution, these bills would likely cause more problems than they would solve, creating widespread free expression concerns in the process.

The bills would establish causes of action against unauthorized uses of digital depictions, defined broadly to cover any time a person’s image or voice appears in an image, video, or audio recording in which they did not actually perform and, in the case of the No AI FRAUD Act, in which the media was digitally created or altered.[3] This scope is overbroad and problematic: the bills reach virtually any likeness or sound reproduction of a person that the person did not participate in creating. That could cover a wide range of First Amendment-protected speech, including political cartoons, parodies, music samples, news podcasts, and other media. The NO FAKES Act does attempt to exclude the use of digital replicas in some important contexts, including commentary, parody, satire, news and sports reports, and documentaries. The No AI FRAUD Act likewise states that the First Amendment is a defense to liability and instructs courts to consider whether the use at issue was commercial, whether it was necessary for the work in which it was included, and whether it adversely affects the work of the depicted person.

These exemptions are important and necessary, but, in their current form, they likely do not provide sufficient protection from expensive lawsuits that could chill valuable speech. The contours of liability for any speech harm are inherently blurred, and courts routinely struggle with where to draw the line even when assessing well-understood causes of action. Creating new and broad causes of action that apply to First Amendment-protected speech, with unclear protections for free expression, will be difficult to parse not only for courts but also for users, social media platforms, and providers of AI tools, creating a chilling effect on legitimate uses of this technology at every stage of content creation and distribution. Moreover, these defenses get the law exactly backwards: the Supreme Court has instructed that the burden is on the government to prove speech is unprotected, not on the speaker to prove that it is.[4]

Both bills would impose strict liability on anyone who created or shared covered content without authorization, regardless of whether it was done for a commercial purpose or for any other reason. To put a finer point on it: under the No AI FRAUD Act, creating a meme of an actor, politician, or anyone else without licensing their image could result in an infringement lawsuit with penalties of up to $50,000 (up to $5,000 in the case of the NO FAKES Act). The speaker would then be saddled with the expense of a lawsuit (or the threat of one) to prove what should be obvious: the speech is protected by the First Amendment. Even the threat of such lawsuits could have an immense chilling effect on constitutionally protected speech.

In addition to incentivizing lawsuits against individuals, styling these new causes of action as intellectual property rights raises further concerns for freedom of expression online. Section 230 of the Communications Act does not protect providers of interactive computer services against claims pertaining to intellectual property. That means anyone whose digitally created or altered likeness appears on the Internet could claim a violation pertaining to intellectual property and bring lawsuits against platforms. The lawsuits would be as endless as the Internet is creative, and the platforms, wanting to maintain their businesses, would be incentivized to block or take down any content that could lead to an IP rights lawsuit, without much, if any, scrutiny. The platforms would inevitably remove wide swaths of First Amendment-protected and valuable speech.

Moreover, these new property rights could actually undermine the bills’ stated goal of protecting performing artists by increasing the risk that artists would lose control over their performances, likenesses, voices, and other intellectual property.[5] The bills would make it possible for AI-generated performances to be created without any involvement from those depicted, living or dead, so long as licensing rights had been granted at some point in the past. Record labels, movie studios, talent managers, and other powerful actors would thereby be incentivized to acquire these rights in their entirety, as early as possible in an artist’s career, foreclosing the artist’s ability to control their participation in, and compensation for, the use of their performance in AI-generated content in the future. If the goal of the bills is to protect performing artists, additional thought and changes are needed to ensure that artists will not be in a worse position if these bills become law.

Finally, the No AI FRAUD Act would also create a cause of action against those who distribute “personalized cloning services,” defined in a manner that may sweep in many technologies that can alter an image, video, or audio recording. This would mean that typical smartphones, as well as computers, laptops, and tablets, all of which allow users to edit video, images, and audio recordings, might invite a deluge of lawsuits.

We applaud Congress for its desire to protect consumers and creators. Unfortunately, the No AI FRAUD Act and NO FAKES Act would make the situation worse rather than better. Congress should carefully consider how existing laws can address potential threats from emerging technologies and fill any gaps with narrowly tailored solutions that preserve free expression online and are consistent with the Constitution. If you have any questions, please contact Kate Ruane, [email protected].

Sincerely,

American Civil Liberties Union
Center for Democracy & Technology
Electronic Frontier Foundation
Fight for the Future
Foundation for Individual Rights and Expression (FIRE)
R Street Institute
TechFreedom

[1] https://files.constantcontact.com/1849eea4801/695cfd71-1d24-4146-a453-3dab7d49babd.pdf

[2] https://www.coons.senate.gov/imo/media/doc/no_fakes_act_draft_text.pdf

[3] https://www.eff.org/deeplinks/2024/01/no-ai-fraud-act-creates-way-more-problems-it-solves

[4] Riley v. Nat’l Fed’n of the Blind of N. Carolina, Inc., 487 U.S. 781, 793 (1988) (“But this statute suffers from a more fundamental flaw. . . . Under the Act, once a prima facie showing of unreasonableness is made, the fundraiser must rebut the showing. . . . This scheme must necessarily chill speech in direct contravention of the First Amendment’s dictates.”); Freedman v. Maryland, 380 U.S. 51, 58 (1965) (“[T]he burden of proving that the film is unprotected expression must rest on the censor.”).

[5] https://rightofpublicityroadmap.com/news_commentary/draft-digital-replica-bill-risks-living-performers-rights-over-ai-generated-replacements/; https://www.eff.org/deeplinks/2024/01/no-ai-fraud-act-creates-way-more-problems-it-solves