Enter the R Street Institute, a D.C. think tank that advocates for free markets, which, with funding from the Knight Foundation, recently set out to explore what a properly designed “multi-stakeholder” process would look like for “problems at the intersection of harm arising from online content moderation, free expression, and online management policies and practices.” In a new report issued today, Applying Multi-Stakeholder Internet Governance to Online Content Management, authors Chris Riley and David Morar describe the process they arrived at and its prospects for driving “sustainable progress in online trust” through “constructive discussions in the open.”

After a nod to Eric Goldman’s excellent Michigan Technology Law Review paper, Content Moderation Remedies, Riley and Morar conclude that neither of the two recently established trade associations, the Trust & Safety Professional Association (TSPA) and the Digital Trust and Safety Partnership (DTSP), both only months old, appears to meet the standard to be considered “multi-stakeholder” in the context of internet governance, which “requires the inclusion of perspectives from industry, civil society and government voices in governance discussions.”

So, they set out to design their own model of what a true multi-stakeholder process would look like, in a methodical effort to facilitate participation from civil society, academia and the tech industry:

The goal was to create a set or framework of voluntary industry standards or actions through spirited but collegial debate. The objective at the outset was not to “solve” the issue of online content management, which is an unfeasible objective, but to generate a space for discussion and forward-thinking solutions.

Once underway, the group arrived at some key points of consensus, which include reasonable ideas such as that “content management must not be the perfect and total prevention of online harm, as that is impossible,” that “content management does not resolve deeper challenges of hatred and harm, and at best works to reduce the use of internet-connected services as vectors,” and that “automation has a positive role to play in content moderation, but is not a complete solution.”

Among the points of agreement the report suggests the process produced is a shared sense of what is, and is not, possible to address with regard to content moderation in this mode of engagement.

Of particular importance to the stakeholders whose input shaped this process is the recognition that this work, like the space of content management more broadly, is not meant to address the full depth of harm in human connection and communication over the internet. Too often, content moderation is seen as the entire problem and solution for disinformation and hate speech, when it is not. We must all explore potential improvements to the day-to-day of online platform practices, while at the same time investing in greater diversity, localism, trust, agency, safety and many other elements. Likewise, content moderation is not a substitute solution to address harms arising in the contexts of privacy or competition.

Having set some bounds for itself, the group then went on to develop a set of propositions accompanied by “positives, challenges and ambiguities” that would need to be more fully examined. These include ideas such as “down-ranking and other alternatives to content removal,” more granular or individualized notices to users of policy violations, “clarity and specificity in content policies to improve predictability at the cost of flexibility,” the introduction of “friction in the process of communication” to potentially reduce the spread of misinformation and other harms, and experimentation with more transparency in how recommendation engines work.

All of the ideas described are supple and nuanced enough to point to the potential of such a multi-stakeholder process. But the group gathered for this prototype effort was small, and it’s hard to load-test how the process would scale. The R Street Institute authors take inspiration from the National Institute of Standards and Technology (NIST), which recently ran what looks like a successful process to arrive at “standards, guidelines and best practices” in cybersecurity, and the National Telecommunications and Information Administration (NTIA), which recently ran a process to arrive at some consensus on technology policy topics such as facial recognition. Indeed, they posit that the NTIA may be the obvious vehicle to further pursue the problem of online content management.

But what of Hunter Biden’s laptop? How would this expertly facilitated process help us avoid such a scenario? I put that question to Chris Riley.

“My hypothesis is that these are inherently hard problems and platforms are – in most but not all cases! – trying to do the best they can in good faith, but stuck within silos,” he told me. “The idea of multi-stakeholder engagement is to help them have better information and factors for consideration for such decisions up front, seeing around inherent myopia / blinders (which we all have), as well as some better trust with civil society to draw on in times of crisis to expand the bubble a little bit. Practically speaking that means they’d be more likely to make a decision and stick to it – which doesn’t mean it’s necessarily correct (on some level there’s subjectivity here), but at least that it’s a clearer articulation of the company’s values in such a complex decision environment.”

That leaves the question of the politicians, though, and whether they are willing to come to the table in good faith. Senators such as Ted Cruz (R-TX) and Lindsey Graham (R-SC) jumped on the Hunter Biden laptop imbroglio last October, demanding hearings and invoking the First Amendment, even though both know full well that the Constitution does not prohibit social media firms from deciding what content to host on their platforms.

Riley is optimistic that a robust multi-stakeholder policy process may ultimately settle the score.

“A lot of politicians want to carve a pound of flesh from big tech right now, but for different motivations, and if any agreement is found there it will come from policy,” he said.

Perhaps the NTIA will take on this difficult task. The Biden administration has yet to choose a permanent leader for the agency; when someone is in the chair, the R Street Institute process may offer a ready playbook. Meanwhile, bills such as the Online Consumer Protection Act, which would require social media companies to provide more specific terms on content moderation practices, including what actions prompt moderation and on what grounds, as well as clarity on how users are notified about and might appeal such decisions, creep along in Congress while the next election cycle looms.
