Chris McIsaac, a researcher at the nonpartisan R Street Institute, has studied AI policies enacted across the country and said he doesn't view legislation as the answer to combating deepfakes.

“It’s exceedingly difficult to write a law that is going to capture the current state of technology and certainly where the technology is going,” McIsaac said…

Some states have attempted harsh crackdowns on AI political content, including outright bans or criminal penalties, according to McIsaac. But those laws are hard to enforce and can run afoul of the First Amendment. 

For example, a federal judge in August struck down a California law that sought to prohibit online platforms from hosting AI-generated political content ahead of an election. A similar law requiring labels on digitally manipulated campaign ads met the same fate.

Those rulings effectively limit deepfake regulation to disclosure requirements, which McIsaac says is an imperfect solution.

“The question is, are these types of disclosures really going to have an effect?” McIsaac said. “The jury is still out. I don’t think there’s great examples of those being implemented…”

McIsaac said elected officials and government bodies should instead focus on educating and communicating with voters about AI, because a direct response to a deepfake on social media can move faster than any piece of legislation or court action.

“I think a better approach is to think of this as more of a communications exercise and having campaigns figure out ways to quickly counter this false claim that’s floating out there and (put) the counterargument out there for why this is false and then (let) the voters decide who they believe,” McIsaac said.