Applying Multistakeholder Internet Governance to Online Content Management
Summary
The content moderation policies of online platforms and Section 230 dominate many modern tech policy news cycles. Little wonder, since these laws and rules strike a specific balance between free expression and the mitigation of online harm, and between technical freedom and responsibility. Yet despite the significance and subtlety of such calculations, debates on these issues are too often conducted through op-ed pages and paid advertisements rather than open dialogue. Few processes or structures currently seek to catalyze constructive dialogue among all of the relevant stakeholders, and it is increasingly clear that the depth required for any meaningful resolution is far beyond the scope of a single effort.
Against this backdrop, R Street's multistakeholder internet governance project on content management endeavors to make progress toward a shared understanding of foundational content management concepts through an inclusive, bottom-up process. The project sets out to identify a set of concrete and specific intellectual buttresses for further discussion, including proposals that are the subject of active debate, by exploring specific challenges, opportunities and ambiguities.
The full report describes this effort, its philosophy of engagement, and the substantive output developed throughout the process. The hope is that platform managers’ and policymakers’ future actions will benefit from greater insight into the challenges and opportunities associated with content moderation and recommendation.
Points of Consensus
Identifying points of consensus was not the primary objective of this exercise; rather, they emerged as a consequence of the process itself. The following perspectives were broadly shared:
1. The standard for successful content management must not be the perfect and total prevention of online harm, as that is impossible.
2. Content management does not resolve the deeper challenges of hatred and harm. At best, it reduces the use of internet-connected services as vectors for those harms.
3. Automation has a positive role to play in content moderation, but is not a complete solution.
4. Automation carries its own risks for internet users’ rights, including rights to privacy, free expression and freedom from discrimination.
Propositions for Areas of Further Attention
Each of the following seven propositions represents a specific area that could receive more attention from stakeholders in the content ecosystem, including industry, civil society, academia and government. As the report details, however, each carries its own challenges to realizing its potential benefits.
PROPOSITION 1: Down-ranking and Other Alternatives to Content Removal
Mitigation methods for content or accounts in violation of an online service's policies that stop short of full removal or blocking, such as "down-ranking," which reduces priority and visibility. The result is continued accessibility but with reduced visibility.
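To make the mechanism concrete, the following is a minimal sketch of how a ranking pipeline might demote rather than remove violating content. The Post fields, the DEMOTION_FACTOR and the violates_policy flag are illustrative assumptions, not any platform's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical demotion multiplier; a real system would tune this
# per policy and per surface (feed, search, recommendations).
DEMOTION_FACTOR = 0.1

@dataclass
class Post:
    post_id: str
    base_score: float      # relevance/engagement score from the ranker
    violates_policy: bool  # set by human review or an automated classifier

def ranking_score(post: Post) -> float:
    """Down-rank rather than remove: violating posts keep a score,
    so they stay accessible but sink in feeds and search results."""
    if post.violates_policy:
        return post.base_score * DEMOTION_FACTOR
    return post.base_score

posts = [
    Post("a", base_score=0.90, violates_policy=False),
    Post("b", base_score=0.95, violates_policy=True),  # demoted, not deleted
]
for post in sorted(posts, key=ranking_score, reverse=True):
    print(post.post_id, round(ranking_score(post), 3))
```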
PROPOSITION 2: Granular/Individualized Notice to Users of Policy Violations
An increase in granularity and detail in the provision of individualized notices to users whose accounts or content are affected by mitigation methods triggered by policy violations.
PROPOSITION 3: Use of Automation to Detect and Classify Policy-Violating Content
The use of automation to evaluate content transactions and detect potential policy violations in real time, particularly by smaller platforms.
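As a rough illustration of the triage this proposition contemplates, the sketch below assumes a hypothetical classify function and invented confidence thresholds; a production system would rely on trained models and far richer signals.

```python
# Hypothetical thresholds: high-confidence scores trigger automatic
# mitigation, while ambiguous ones are routed to human reviewers.
AUTO_ACTION_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def classify(text: str) -> float:
    """Stand-in for a trained model: returns an estimated
    policy-violation probability. A toy keyword heuristic here."""
    flagged_terms = {"scam", "fraud"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def triage(text: str) -> str:
    score = classify(text)
    if score >= AUTO_ACTION_THRESHOLD:
        return "auto-mitigate"   # e.g., down-rank pending review
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human-review"
    return "allow"

print(triage("Totally legitimate offer"))        # allow
print(triage("This scam is fraud, send money"))  # auto-mitigate
```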
PROPOSITION 4: Clarity and Specificity in Content Policies to Improve Predictability at the Cost of Flexibility
In contrast to Proposition 2 (regarding increased granularity in individualized ex post notices of policy violation), increased specificity and detail in the generalized ex ante statements of content policy themselves.
PROPOSITION 5: Friction in the Process of Communication at Varying Stages
Intentional introduction of friction into communication pathways, such as pausing automated sharing or repurposing of content and prompting the user for additional input.
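One way to picture such friction is a pre-share check that pauses resharing and prompts the user, as in the sketch below; the has_opened_link signal and the prompt wording are hypothetical.

```python
def confirm_reshare(has_opened_link: bool, prompt=input) -> bool:
    """Introduce friction: before resharing an article the user has
    not opened, pause and ask for an explicit confirmation."""
    if has_opened_link:
        return True  # no friction needed; the share proceeds immediately
    answer = prompt("You haven't opened this link. Share anyway? [y/N] ")
    return answer.strip().lower() == "y"

# Example: simulate a user who declines after being prompted.
shared = confirm_reshare(has_opened_link=False, prompt=lambda _: "n")
print("shared" if shared else "share cancelled")
```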
PROPOSITION 6: Experimentation and Transparency in Recommendation Engine Weightings
Modification of and visibility into back-end recommendation engines and presentation algorithms as a means of mitigating online harm, including in combination with other propositions, such as the use of automation (Proposition 3) to engage in down-ranking.
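A minimal sketch of what visibility into weightings could mean in practice: a scoring function whose weights are explicit, adjustable and reportable. The signal names and weight values below are invented for illustration.

```python
# Hypothetical, inspectable weights for a recommendation score.
# Publishing or exposing these is one form of the transparency this
# proposition contemplates; adjusting them enables experimentation.
WEIGHTS = {"relevance": 0.6, "engagement": 0.3, "recency": 0.1}

def recommend_score(features: dict[str, float],
                    weights: dict[str, float] = WEIGHTS) -> float:
    """Weighted sum over named signals; because the weights are
    explicit, they can be audited, reported and A/B-tested."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

item = {"relevance": 0.8, "engagement": 0.9, "recency": 0.2}
print(round(recommend_score(item), 3))  # 0.77 under the default weights

# An experiment might down-weight engagement to reduce amplification:
experiment = {"relevance": 0.7, "engagement": 0.1, "recency": 0.2}
print(round(recommend_score(item, experiment), 3))  # 0.69
```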
PROPOSITION 7: Separate Treatment for Paid or Sponsored Content
Application of different standards to content that potentially violates policies based on whether the content is organic, paid or sponsored by the speaker, including payment for placement or prioritization in various forms.
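As a sketch, differing standards might translate into stricter review thresholds for paid placements; the categories and threshold values below are illustrative only, not drawn from any actual policy.

```python
# Illustrative per-category thresholds: paid and sponsored content
# is held to a stricter standard than organic speech.
REVIEW_THRESHOLDS = {"organic": 0.90, "sponsored": 0.75, "paid": 0.60}

def needs_review(violation_score: float, content_type: str) -> bool:
    """Flag content for review when its estimated violation score
    meets the threshold for its category."""
    return violation_score >= REVIEW_THRESHOLDS[content_type]

print(needs_review(0.7, "organic"))  # False: below the organic threshold
print(needs_review(0.7, "paid"))     # True: paid content reviewed sooner
```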
Key Takeaways
The process reached consensus on some points, and greater consensus could be possible with further articulation of each proposition and its associated considerations.
No multistakeholder process will reach consensus on every issue; asymmetric assumptions and normative considerations are to be expected and accepted as inherent to inclusive processes.
The richness of the considerations surfaced through constructive discussion is valuable. A scaled-up convening, such as one led by a government body, could add substantial value to ongoing internet governance conversations if similarly designed to be inclusive and constructive.