From Nextgov:

Social media companies need to be more transparent about the content moderation requests they receive from governments in order to better safeguard users’ speech online, according to a Republican senator who has focused on tech-related policy issues during her time in office.

Sen. Cynthia Lummis, R-Wyo.—who serves on the influential Senate Commerce, Science and Transportation Committee—said during a virtual event hosted by the R Street Institute on Tuesday that it is “an unprecedented time in history” for freedom of speech and the manner in which governments regulate public discourse, particularly “the real time failures of government when it comes to interacting with companies that provide platforms for speech on the internet.”

Many Republican lawmakers have claimed in recent years that social media companies are censoring conservative users on their platforms, with government moderation requests becoming a key focal point of their concerns.

These complaints were further amplified by the recent release of the Twitter Files—internal documents from the social media giant that purported to show government involvement in its content moderation decisions. While the document releases have been highlighted in conservative circles as evidence for concern, critics have largely dismissed them, arguing that they instead demonstrate how routine and frequent the moderation requests received by platforms such as Twitter actually are.

To combat perceived online censorship of conservative voices, congressional Republicans—and even some states—have directed much of their ire at Section 230, a portion of the 1996 Communications Decency Act that enables online platforms to host and moderate third-party content without being held liable for what their users may post or share. Section 230 has become a political flashpoint for both parties, with many Democrats expressing concern that it does not do enough to limit the spread of harmful online content, and Republicans contending that it stifles free speech.

Given the disparate political views around Section 230, Lummis seemed to advocate for a more targeted approach to addressing concerns about content moderation requests that originate from government entities. She said that Congress currently has two options: “we can reform Section 230, which is discussed a lot here on Capitol Hill, or create more stringent requirements for transparency around government moderation requests.”

“And, at this juncture, increasing transparency with regard to government requests for content moderation does seem to be the best move,” she added, saying that Twitter users should know if and when the White House, for example, requests that policy-related content be removed from the platform.

Lummis cited the PRESERVE Online Speech Act—legislation that she co-sponsored in 2021—as one way of enhancing the transparency of these types of moderation requests. The bill would require social media platforms “to issue a public disclosure containing specified information related to a request or recommendation by a government entity that the service moderate content on its platform.” House Republicans introduced similar legislation at the start of the 118th Congress earlier this month.

“The government needs to be very careful about how they wade into regulating social media platforms, so as not to stifle free speech,” Lummis added.

Some social media platforms are already working to publicly disclose the types of content moderation requests that they receive from governments around the world. Meta—the parent company of Facebook, Instagram and WhatsApp—maintains transparency reports on government requests, which show that governments collectively file hundreds of thousands of requests with the company annually, including more than 237,000 requests for user data from January through June 2022 alone.

Kaitlin Sullivan—Meta’s director of content policy and a panelist at the R Street Institute’s event—said that the company works to notify users “in almost every case when their content is removed for violating our community standards,” as well as when “their content was removed or restricted in a particular jurisdiction on the basis of a formal government report, which is generally that that content violates the local law.”

But Sullivan said Meta is sometimes barred from disclosing these types of requests, including FISA orders issued on national security grounds, and requests from some countries “that give us legal orders, and then have gag orders that come with those, for the companies where they cannot disclose to the user or to the public what the request was, and who it was from or why.”
