One of Melvin Kranzberg’s six laws of technology states that “technology is neither good nor bad; nor is it neutral.” This is ever so apparent in the process of moderating content online. Not in the once-prevailing narrative of anti-conservative bias, but in the implicit biases that both human and automated content moderators demonstrate, and in their impact on marginalized communities.

Implicit bias is a phenomenon in which people who strongly profess to being open-minded nonetheless show racial, gender or other biases in their actions. In the classic Implicit Association Test, participants find it easy to match white faces with positive images and minorities with negative ones, but struggle when the pairings are swapped. Put those subjects under cognitive stressors like time pressure, and those discrepancies only worsen.

From the overwhelming workload to the productivity demands, to the unclear criteria on what constitutes illicit content, cognitive stressors dominate the content moderation field. Given these conditions, it should be no surprise that when a content moderator sits down at their desk, facing a queue of thousands of cases with little guidance and a short time frame in which to complete their work, the biases that live deep within their unconscious mind easily slip to the surface.

These biases ultimately bleed into moderators’ decisions about what user content can remain on internet platforms and what comes down.

Instances of racial discrimination, removal of documentation of human rights abuses, and restriction of LGBTQ content point to not just a surfacing of this bias, but a takeover. For example, it took just 15 minutes for Facebook to delete high school teacher Carolyn Wysinger’s post stating, “White men are so fragile, the mere presence of a black person challenges every single thing in them,” for violating its community standards for hate speech. Similarly, video reports of the April 7 chemical attacks in Douma, Syria, were quickly removed from YouTube for displaying violence. Recently, a group of video-makers sued YouTube and parent company Google, claiming both discriminate against LGBTQ-themed videos and their creators by removing advertising from videos featuring “trigger words” such as “gay” or “lesbian,” labeling LGBTQ-themed videos as “sensitive” or “mature,” and restricting them from appearing in search results or recommendations.

One might think that artificial intelligence could work around this very human problem of implicit bias, but in fact, it could make things worse. An automated system’s accuracy in moderating content depends on how human developers train and program it. Developers have successfully trained AI-based tools to moderate certain content, such as child sexual abuse imagery and copyright-infringing content, with little evidence of implicit bias. This is because these categories offer a wealth of material with which to train AI tools and clear parameters regarding what falls into them.

Extremism and hate speech, on the other hand, are much trickier to moderate. The criteria for identifying this content are vague and inconsistent. These categories also involve a range of nuanced variations related to different groups and regions, meaning context can be critical in understanding whether content should be removed. For the time being, AI is unable to detect these nuances. As such, human input is necessary, and it will inevitably come with implicit biases.

Fortunately, there are plenty of approaches companies can take to reduce implicit bias in content moderation. Companies should focus on building diversity in the workforce, which has been shown to reduce implicit bias by exposing content moderators to people of different stripes. Clear standards for what constitutes illicit content would also make the moderation process more predictable and less discretion-based.

But those important ideas won’t rise to the surface if politicians insist on talking about unproven conservative bias rather than the kinds of bias that are known to exist.

We need a radical shift in the political conversation about bias in content moderation, away from mythical issues and toward ones that affect a large portion of our society. Implicit bias is a phenomenon that can never be truly eliminated; it is part of our humanity. It can, however, be minimized through awareness, advocacy and the standardization of moderation guidelines across the industry. The rest of us can increase awareness of its existence through reporting and through discussion, both on the Hill and throughout society.

So let’s talk about it, and let’s make a real change for the better.
