From The Norman Transcript:

“This bill is going around state legislatures [across the country] and it’s super unconstitutional,” said Shoshana Weissmann, a fellow at the R Street Institute, a nonpartisan nonprofit think tank based in Washington, D.C. “It looks pretty much identical to the others.”

Weissmann said she often gets frustrated with the political left’s attempts to outlaw hate speech, since hate speech is not itself illegal, but she noted that social media platforms’ terms of service specifically define hate speech and require users to agree not to engage in it. She said it is unconstitutional for the government to penalize companies for their moderation decisions.

“One, it’s stupid to suggest that hate speech must stay up — it’s the platform’s choice,” she said. “And two, it’s just unconstitutional. Just as the government cannot outlaw hate speech, it cannot require hate speech to go unmoderated; that is a violation of the First Amendment, discriminating against speech on an arbitrary basis.”

Although the senator’s bill offers its own definition of hate speech, Weissmann said social media companies still have the right to remove posts that violate their terms of service.

“It’s ridiculous to require a social media website to keep hate speech up for any reason,” she said. “…You don’t even have to be a First Amendment expert to see just how bad it is. And not just that, but Section 230 preempts state law, so this also violates 230. You have that side of it that is incompatible with federal law, and it’s just plainly incompatible with the Constitution.”

Section 230 of the Communications Decency Act shields online platforms from liability for content their users post and protects them when they block or screen objectionable material in good faith.

Weissmann said that, because of Section 230’s provisions, she believes the bill would be challenged and quickly struck down in court.

“A big reason 230 was passed and introduced in the first place was because, before 230, there was this thing called the moderator’s dilemma,” she said. “[Because of this] a platform moderated nothing so that it wouldn’t be found liable for content, and if it did moderate and try to keep its platform family friendly, then it could be found liable for speech it had no idea existed at all.”
