Regulating “Big Tech” is no longer just a political slogan; it is now a priority for some countries. Threading between perceptions of governmental censorship and an anything-goes approach are co-regulatory mechanisms, which some countries are exploring. Co-regulatory mechanisms are governance structures in which government involvement is largely limited to an oversight role, while most of the actions are taken by other stakeholder groups. Two different approaches to instituting co-regulatory mechanisms are the European Union (EU)’s Digital Services Act and the United Kingdom’s Online Safety Bill.

EU Digital Services Act
The Digital Services Act (DSA), a wide-ranging legislative text that creates new rights and obligations, makes a clear distinction between different types of online services. It divides online services into four broad categories: intermediary services (loosely defined as those offering network infrastructure); hosting services (including cloud and web hosting); online platforms; and very large online platforms, or VLOPs (those that have significant reach within Europe). The four tiers are cumulative, in that all the obligations for intermediary services also apply to the next three types of services, the added obligations for hosting also apply to the next two, and so on, with VLOPs having specific rules that only they need to follow. Among those VLOP-only rules, the DSA builds three distinct co-regulatory mechanisms: assessments, codes of conduct and audits. A brief description of the DSA’s co-regulatory mechanisms follows, with a longer explanation available here.

Assessments are created as annual obligations designed to uncover systemic risks, to assess those risks’ effects on several important areas (like fundamental rights and civic discourse) and to understand how platforms’ choices across their own ecosystems influence systemic risks. The assessments are loosely structured along those lines by the government; however, VLOPs create these assessments on their own and subsequently build and deploy mitigation measures. Government oversight is also present, with both EU- and national-level regulators issuing guidelines and publishing comprehensive yearly reports on these assessments and the ensuing mitigations.

Codes of conduct, on the other hand, address industry-wide “significant systemic risk” and build in (voluntary) commitments to mitigations. The codes are drafted by multistakeholder structures, which usually have to include VLOPs and industry groups as well as other stakeholder groups; civil society, for example, may also take part at the discretion of the EU Commission. Government oversight is concentrated mostly in its role of assessing the codes of conduct and considering whether they align with the needs and interests of EU citizens.

Audits are independent yearly requirements for VLOPs that verify compliance with both the DSA and code of conduct obligations. However, the legislation does little to design the audits beyond a minimal framework and guidelines on who can serve as an auditor. The DSA supports voluntary standards for audits, with government oversight taking the form of future legislation that can create any necessary rules.

U.K. Online Safety Bill
The Online Safety Bill (OSB) follows the general trajectory of British legislation in this area, going back to April 2019, when the government published the “Online Harms White Paper.” While the contents have evolved over the years through several different amendments, the British approach to dealing with online speech has remained constant. The OSB has three specific platform categories (and a two-tier system for search engines): first, all in-scope (or regulated) user-to-user services; second, Category 2b, which encompasses services with potentially risky functionalities; and third, Category 1, which denotes the highest-risk user-to-user services. The inclusion of co-regulatory mechanisms across the three categories has not dampened the government-heavy approach. In fact, the three similar mechanisms—assessments, codes of practice and audits—are constructed very differently from their EU counterparts.

Assessments in the United Kingdom are focused on risk from illegal content, as well as the safety of children and adults. The Office of Communications (OFCOM), the U.K. agency designated by the bill as the online regulator, first performs an assessment of its own, and then builds risk profiles that form the basis for the platforms’ own assessments. The assessments should then lead to platforms’ own mitigation and risk management measures.

Codes of practice are created by OFCOM, but the legislation explicitly states (in Part 3, Ch. 6, 36) that it must consult a very wide range of stakeholders, including other government representatives, platforms, civil society, children’s and victims’ representatives, and numerous experts with relevance to online safety. Because they are created by the government, the codes—designed for compliance—have to pass through two different state institutions.

Audits are designed as a rarely deployed measure at OFCOM’s disposal, used to check whether platforms are complying with their requirements, to identify the risks that stem from any lack of compliance and to find methods to mitigate those risks. While OFCOM is in charge of the audits, it can authorize others to conduct them as well.

Analysis
The two bills have different perspectives on governance: The United Kingdom builds its approach around the regulator (OFCOM), whereas the EU builds from an initial framework of obligations and rights. The similarity between the two is the lack of a direct role for civil society, which inhabits a more muted advisory capacity in the United Kingdom and, in the EU, is a general participant at the table for the codes of conduct.

The vision of the United Kingdom is one where the state is still the strongest actor. One could argue that the co-regulatory aspect is barely present; however, the role of industry is significant enough, and OFCOM’s main task, while intense and all-encompassing, is generally one of oversight. The merits of this approach are tied primarily to the cultural and political context of the jurisdiction, but they also highlight the wide range and flexibility of co-regulatory constructions.

Co-regulatory mechanisms are a good approximation of collaboration across stakeholder groups: governments understand the limitations of their knowledge and expertise regarding the technical aspects of the tech industry; the tech industry acquiesces to a new and enhanced set of responsibilities and rules; and civil society is able to use its limited resources more effectively. The mechanisms are also useful in the current global context, where legislators are acting in an environment with little information about how companies’ speech rules are made (an issue some companies themselves have difficulty with), because they set out preliminary frameworks from which more information, and further regulation, can emerge.

The United States, long a proponent of a hands-off approach to regulating speech, is entering a phase where, whether through judicial decree or legislative action, online speech and the platforms that host it will soon have to comply with new rules. Co-regulatory instruments have been adopted, in various ways, by both the EU and the United Kingdom, and they may ultimately be a good way to conceptualize stronger restrictions on the actions of “Big Tech” within a First Amendment context.