A fifth principle for pragmatic content policy?
This week, the R Street Institute published a paper I wrote, Pragmatic Principles for Content Policy and Governance. While it’s easy to find thoughtful criticism and negative takes on what should not be done, this paper is an attempt at offering an affirmative public policy framework for the significant and persistent harms occurring in online fora, such as social media, where users congregate and communicate. It presents these ideas in the context of an American legal and political climate that poses unique practical obstacles but also offers some powerful market- and public-driven opportunities for change.
Briefly, the four principles the paper puts forward are:
1. Uphold, but do not privatize, the law – codify the court order standard, ensure adequate time for court order compliance, bar general monitoring obligations and privatized determination of legality, and study the scope and effectiveness of modern criminal law.
2. Protect consumers – provide guidance and resources for the Federal Trade Commission (FTC) 1) to receive and enforce complaints of procedural deficiencies, and 2) to study the state of content policy disclosures and determine if a rulemaking proceeding is desirable to calibrate disclosure obligations.
3. Empower critical community – authorize and resource National Telecommunications and Information Administration (NTIA) multistakeholder discussions, invest in National Science Foundation (NSF) research on real-world effects of internet technologies, and prioritize flexible soft law mechanisms over common law or politics to establish evolving standards of responsibility.
4. Target specific concerns with specific solutions – authorize and resource the Department of Justice (DOJ) to study trusted reporting of suspicious activity; pass the Child Safety Act and federal privacy legislation.
I think these four principles and their associated recommendations would provide traction on problems that all too often feel intractable, though I will not claim that my framework will solve them all. That’s because no policy change or legal intervention can solve these problems alone, as they are fundamentally human in their origin, not technological. The most any of us working to address them can hope to do is produce some sustainable improvement in the near term and a world where technology does not make matters worse in the long term.
While I’m confident in the paper and its four principles, I’ve been wondering if it needs a fifth. This thought comes from applying both a political lens (many in Congress and others in positions of power are out for digital blood, particularly after the revelations brought by the Facebook whistleblower Frances Haugen) and a substantive lens (it feels like we need a big shove on the levers of power if we’re going to effect true change). In particular, advertising and commercialization shape the world of online speech, activity, and liability in fascinating ways, and I think there is room for some special consideration of such activity distinct from the moderation and management of organic speech online.
So, I present for the reader’s consideration an additional “Fifth Principle” to complement the four described in the aforementioned paper. It may not be ready for prime time in the same way as the other four, mostly because I can’t gauge how much practical impact it would have. I don’t know how many businesses would face increased liability, or at what scale, given the specific limitations and safeguards built into it. Nor do I know how many other businesses would feel a chilling effect from the prospect of litigation, even if the actuality proves far narrower. I don’t think it’s too extreme, and I’m comfortable with the risk inherent in it, but I submit it here in hopes that Tech Policy Press readers may offer feedback.
This fifth principle would exempt commercial transactions from Section 230 immunity, subject to a number of safeguards, beginning with fee-shifting provisions to limit abuse. In the context of advertising specifically, I propose a safe harbor that would reintroduce legal protection where a full registry of ads and targeting criteria is made available. I also suggest a sunset provision: limiting Section 230 immunity means handing decisional power off to courts, and a sunset gives Congress the opportunity to review the evolution of caselaw and corporate practice over the next few years and reevaluate the balance at that time. This is, by design, a basket of ideas that can stand together but can also offer value as additions to other proposals; for example, fee shifting to limit abuse seems like it would add value in any context where the protections of Section 230 are further limited.
The big part, of course, is the exemption from Section 230; a hallmark of the original four principles was that they proposed meaningful and impactful government intervention without making any such changes. I provide fairly extensive justification for a commercial exemption, particularly when drawn as narrowly as I have drawn it here, in a manner that focuses the increase in risk on the parties who, in the classic tort law sense, stand best poised to mitigate the potential for harm. I also offer an impact analysis, or at least the beginnings of one. As powerful as this principle is on its own, it’s best understood within the context of the four principles in my original paper, which still center the core of the change we need on strengthening a critical community around platforms.
The Fifth Principle: Separating organic speech from commercial content
Using legal mechanisms to calibrate the appropriate level of responsibility in moderating organic speech carries many drawbacks compared to a more adaptive, community-determined approach. However, for commercial transactions conducted online, a different approach may be warranted. Some legislative proposals already treat organic speech and paid or sponsored content differently, most notably the SAFE TECH Act, which includes an “accepted payment” exception to Section 230 among its provisions. Similarly, John Bergmayer of Public Knowledge has proposed waiving Section 230 immunity for online ads, narrowing his focus to the content of the ad itself.
Both pragmatic and principled arguments support differentiating commercial transactions in some way. The pragmatic arguments begin from the assumption that the status quo is not satisfactory and that a meaningful change to current incentive structures is necessary to produce greater investments in responsibility by certain intermediaries. Changes that apply strictly to commercial transactions pose less risk, in many dimensions, than any policy that centers on organic speech.
The principled arguments are at least as clear as the pragmatic ones, and less dependent on assumptions. This change would apply only to companies that are actively being paid to provide a platform for the specific content that results in injury, and any greater liability would attach only to the extent the companies are insufficiently responsible in the safeguards they take to mitigate the potential for harm.
For a variety of reasons, distinguishing commercial transactions from organic speech is important. However, whether that distinction requires differential treatment in law is a more difficult question. It also depends on the nature of the commercial transaction. Two types in particular are worth specific consideration: online marketplaces and ads.
Online marketplaces
The European Commission’s Digital Services Act (DSA) proposal includes specific “Know Your Business Customer” obligations that require operators of online marketplaces to identify the “commercial traders” responsible for products, services, and other listings within their marketplaces. Where harm arises from an online marketplace transaction and the originator of the harm can be identified, that party can be held liable for injury arising from their product or content made available through the intermediary. In some contexts, ensuring that the proper party can be identified and held liable may be sufficient to waive responsibility for intermediaries whose actions facilitate the harm.
California state law, in the context of strict liability for products, tells a different story. In two recent cases concerning Amazon’s role as a marketplace for third-party sellers, California appellate courts held Amazon liable for the actions of a third-party seller and found that Section 230 offered no protection. In both Bolger and Loomis, the appeals courts found that “Amazon’s control over both the product and the sales transaction formed the basis for its liability.” These courts dismissed the use of Section 230 as a defense on the grounds that liability arose from Amazon’s conduct (in making a defective product available for sale) and not from the content of the third-party seller’s listing itself (the text associated with the listing). The Loomis court rejected Amazon’s contention that it operates principally as a facilitator of a third-party transaction, along the lines of a lender or an auctioneer. While the Bolger and Loomis cases are specific to California and its strict liability laws, they are useful to illustrate substantive distinctions based on the proximity of a platform to the central vertical supply chain by which a product is made available and purchased.
In both the DSA and the California cases, some level of identification and vetting of a third party is necessary to absolve an online marketplace provider. The question that arises, then, is how to evaluate whether an intermediary’s vetting and harm mitigation efforts should be considered sufficient to avoid legal liability. There is no easy or immediate answer. Thus, while it comes with ample risk, creating some form of statutory exception to Section 230 immunity for commercial transactions in online marketplaces that result in harm (essentially codifying the courts’ conclusions in Bolger and Loomis into federal law) would allow common law mechanisms to develop legal standards of sufficient responsibility for intermediaries who operate online marketplaces or advertising networks.
This responsibility hinges on the ability of a service provider to exercise fine-grained control over the third-party products and services offered through its marketplace. Where a platform retains the legal right to select certain products or services to display or promote, that right conveys the ability to monetize and profit from such a selection. Under this proposal, that retained right would come with a legal obligation to invest in responsibility, and with the consequence of potential liability without the protection of Section 230 immunity. However, where a platform functions in something closer to an infrastructure capacity (for instance, by providing hosting or payment services to a third-party merchant) without retaining any legal right to choose specific products or transactions, that platform would retain its immunity, and along with it the freedom to engage in additional filtering or other mechanisms without fear that the use of such precautions would jeopardize Section 230 protection.
Online ads and proximity
Much of the same logic from online marketplaces applies in the context of advertising. If specific harm arises from an advertisement, it seems reasonable to ask the online intermediary what steps were taken to vet the originating party who produced the ad or paid for its placement. One distinction is that harm in the context of online ads can arise not merely from the content of the ad itself, but also from its targeting criteria. For example, housing advertisements that are intentionally targeted only to a certain race violate existing anti-discrimination law. The scale of these harms is unclear, and perhaps unknowable in the current state of ad tech and network operations.
A separate distinction arises with the question of proximity. Online marketplaces typically involve a short chain of responsibility between the immediate provider of the product or service and the intermediary, whereas ads can travel through long chains of intermediaries. Small, independent websites may be supported through advertising in ways that give them no functional control over the ads running on their sites. Their choice is, at best, which advertising intermediary to use, with many defaulting to a major network such as Google out of simplicity. Should injury arise from an ad run through one of these networks sufficient to justify civil litigation (and thus make the question of potential Section 230 immunity relevant), it is unclear where the responsibility for vetting and harm mitigation should begin and end.
It is reasonable to believe that courts would not assign liability to such a small service provider. In a hypothetical scenario where such a provider faces liability without the protection of Section 230 immunity, any reasonable standard that emerges from common law ought to excuse companies that lack meaningful knowledge of the placement of the harmful ad and any fine-grained control over the content within the inventory from which the ad was drawn. However, courts are not perfect, and reasonable treatment is not guaranteed. Even the possibility of liability can cause political and reputational harm to a company that cannot be entirely reversed by subsequent vindication. Real litigation costs apply in such circumstances as well; once caselaw develops (assuming it is sound and consistently followed), those costs would presumably shrink over time, though not back down to where they are today with Section 230 immunity intact.
A better approach that achieves a similar goal would fine-tune the removal of Section 230 immunity in the context of online ads: it would apply only where the ad itself was the source of injury, and only to defendants with sufficient proximity to the ad (such as contractual privity) who hosted or prioritized the ad in exchange for payment. Such a proximate relationship to, and control over, the selection of individual content to display or promote conveys an enormous opportunity for profit through algorithmic curation and sponsorship. That profit can fairly be used to pay the increased costs associated with responsibility for the selection. Preserving Section 230 immunity for intermediaries lacking proximity would allow such services to run additional filtering or other mechanisms downstream of the content selection process without fear that courts would find their choice to do so subjects them to liability if those downstream mechanisms fail to catch all injurious content.
Impact analysis of a proximate payment immunity limitation and harm mitigation
Even an exception to Section 230 immunity narrowly scoped to online marketplaces and proximate ad placement would substantially increase the number of intermediaries who face the potential for liability. It would include not only large advertising networks, but also services like Craigslist that host sponsored content, as well as platforms like Airbnb that review third-party listings for potentially illegal activity. Of course, the services newly brought within scope would not immediately face any liability, even in the event of injury; rather, they would need to prepare to bear the costs and burden of showing in civil litigation that their safeguards against harm were reasonable and sufficient should an injury arise from one of their listings.
For manual transactions, heightened review seems fairly straightforward: it means increased care in existing processes. Automated transactions, such as programmatic advertising, are more challenging. Properly balanced, however, this change should not bar automation from the ad tech ecosystem or any other intermediary market. The use of automation does mean that mistakes can be made and harm can slip through the cracks, leading to injury; but this can happen with human review as well. The standard of liability would turn on whether the safeguards built into an automated system are determined to be reasonable in the context of the platform’s policies and the resulting harm. While what counts as “reasonable” in the use of automation in ad tech and other contexts will evolve over time, focusing at this level allows for the use of expert witnesses in litigation as a means of overcoming courts’ expertise limits and increasing the likelihood of correct judicial outcomes.
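To give a concrete sense of what “safeguards built into an automated system” might look like, here is a minimal sketch of one such check. The regulated categories, field names, and rules are my own illustrative assumptions, not a description of any platform’s actual pipeline or of what a court would deem reasonable.

```python
# Purely illustrative sketch: one kind of automated safeguard an ad network or
# platform might run before accepting a programmatic placement. Categories,
# field names, and rules are assumptions for illustration only.

# Ad categories where targeting by protected characteristics raises legal concerns
# (e.g., housing, per the anti-discrimination example above).
REGULATED_CATEGORIES = {"housing", "employment", "credit"}

# Targeting dimensions that should trigger review for ads in those categories.
PROTECTED_TARGETING_KEYS = {"race", "religion", "sex", "age", "national_origin", "disability"}


def flag_targeting_concerns(ad_category: str, targeting_criteria: dict) -> list:
    """Return the targeting keys that warrant human review for a given ad."""
    if ad_category.lower() not in REGULATED_CATEGORIES:
        return []
    return sorted(set(targeting_criteria) & PROTECTED_TARGETING_KEYS)


# Example: a housing ad targeted by race and zip code gets routed to human review.
flags = flag_targeting_concerns("housing", {"race": "X", "zip_code": "94110"})
assert flags == ["race"]
```

The point of a check like this is not that it catches everything, but that its existence, scope, and escalation path are exactly the kinds of facts a court (aided by expert witnesses) could weigh in deciding whether a platform’s safeguards were reasonable.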
What “reasonable” behavior means in the context of harm mitigation by intermediaries is a very difficult question. Asking courts to resolve it depends on a rational evolution of subsequent case law, for which the sample of cases in the years to come may be small, and a single bad precedent could carry lasting echoes of harm. Additionally, multistakeholder processes and the normal evolution of technology may produce a durable and clearly accepted standard of sufficiency that could serve as the basis for reintroducing immunity for all sponsored transactions under certain circumstances. To account for both of these possibilities, sunsetting the limitation on proximate sponsorship immunity after a reasonable period of time, such as five years, would help ensure that the balance between restriction and innovation is preserved.
To further limit unintended externalities that could arise from such a change to Section 230, it may be worthwhile to include fee-shifting provisions to penalize bad-faith plaintiffs and limit the abuse of litigation without good cause. For example, in patent law, an area often perceived to enable bad-faith litigation as a means of extracting revenue through settlements rather than outright victory, fee shifting is possible under the relevant statute, and a recent Supreme Court decision, Octane Fitness, loosened the standard for its use.
Targeting commercial transactions for immunity waivers is more appropriate than targeting the use of automation for amplification. Adding friction to all forms of automation, not merely those that involve specific transactional payments, would expose a broader swath of businesses, services, and functionalities to greater liability, creating much more potential risk. A focus on automation writ large also reintroduces speech as the nexus of harm, including speech activity not associated with any business relationship or transaction such as a paid advertisement, sponsorship, or placement.
Empowering the critical community through an effective ads registry
Unlike products or services made available through online marketplaces, online ads are themselves a form of speech, albeit in a commercial context. It’s worth probing whether the same arguments that place the critical community at the center of setting and enforcing responsibility standards still apply in this context, allowing soft law forces to be more effective and more forgiving than common law mechanisms. However, the functioning of the critical community depends on sufficient transparency into intermediary practices to allow for effective identification and understanding of harm as it occurs, and for a ready dialogue between service providers and researchers. In the context of online ads, that transparency does not exist.
This gap does suggest the possibility of a safe harbor: a means by which an online advertising intermediary could earn back immunity from liability. In principle, all ads run by an ad network could be captured within a central advertising registry (see, e.g., this framework in Canada for political ads), along with identity information for the publisher of the ad and the targeting criteria selected by that publisher for placement. If that registry is then made available to civil society and academic analysts and researchers, suitably indexed and searchable, the critical community would have the opportunity to engage in effective monitoring. The result would be a substantial increase in harm identification and the ability to wield massive soft-power pressure on advertisers to improve their diligence and harm mitigation practices, creating a more agile and effective means of ensuring responsibility than the relatively high-cost and inefficient development of common law.
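To make the registry concrete, here is a minimal sketch of what a single registry entry and a simple researcher query might look like. The field names and structure are my own illustration, not a schema drawn from the paper, the Canadian framework, or any existing registry.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: hypothetical fields concretizing the elements described above
# (ad content, verified publisher identity, and targeting criteria), stored in a
# form that researchers and civil society could index and search.

@dataclass
class AdRegistryEntry:
    ad_id: str                 # stable identifier assigned by the ad network
    network: str               # intermediary that placed the ad
    publisher_identity: str    # verified identity of the party that paid for the ad
    creative_text: str         # the content of the ad itself
    targeting_criteria: dict   # criteria selected by the publisher (geography, age range, etc.)
    first_shown: date
    last_shown: date
    impressions: int = 0       # aggregate reach, to help researchers prioritize review


def ads_targeting(entries, key, value):
    """Return registry entries whose targeting criteria include a given key/value pair."""
    return [e for e in entries if e.targeting_criteria.get(key) == value]
```

The particular fields matter less than the combination: ad content, verified publisher identity, and targeting criteria gathered in one indexed, searchable place available to the critical community.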
The proposals for consideration in this section are significant, and further validation of their potential scope and effect would be valuable. Alternatives are also possible that need not involve modifying Section 230 at all. For example, directly mandating the compilation and publication of an ads registry would render moot the proposed ads exemption and its attendant safe harbor. And, in practice, the Bolger and Loomis precedents above may achieve the same outcomes as the proposed exemption for online marketplaces without statutory change, at least in California, and there may be a pathway to reinforce that expectation more narrowly through federal law.
Proposals for Consideration:
- Exempt commercial transactions through online marketplaces from Section 230 immunity.
- Exempt the proximate provision of online advertisements from Section 230 immunity.
- Create a safe harbor that restores immunity for online advertisements whose content and targeting criteria are made freely and effectively available through an ads registry.
- Sunset the above exemptions five years after the effective date of the statute.
- Include appropriate fee-shifting provisions to limit abusive litigation.
Conclusion
As I’ve written before, the right answers for public policy in the space of online speech and liability “are often balanced on the blade of a knife.” But overall, I believe the ideas I’m putting forward are balanced for where the internet is today. Despite the inherent uncertainties as to scope, I share them here for feedback because we all need to tolerate a bit of risk and uncertainty in our policymaking just as we must in every other facet of our lives. Let me know what you think – you can find me at mchrisriley.com.