The internet has altered every aspect of our lives. It has helped us launch political campaigns, begin romantic relationships, discover faraway places, document human rights abuses, and ensure that those subject to disasters are safe and have access to the resources they need. Much like any other great innovation, however, it also has its dark side.

Indeed, the internet has become a breeding ground for terrorists, a marketplace for human trafficking, a platform for child sexual exploitation, and a stage for hate speech and violence. To combat the presence of such terrible things, the job of content moderator was born.

Content moderators review and analyze user reports of abusive content found on platforms and decide, based on a predetermined set of rules and guidelines as well as the law, whether the content should stay up or come down.

The debates stirring in Congress and society relating to the role of content moderators have fueled many a baseless claim. Here are five of the most repeated myths.

Myth No. 1: Content moderators are part-time employees who work in less-than-ideal conditions.

The conditions tied to this type of work have spawned horror stories of content moderators working as contractors for as little as $28,800 per year under extreme micromanagement. In the Philippines, workers operate out of jam-packed malls where they spend over nine hours a day moderating content for as little as $480 a month. With few workday breaks and no access to counseling, many of these individuals end up suffering from insomnia, depression and post-traumatic stress disorder.

These workers are real, and these stories are true. However, another set of content moderators exists. These employees also often struggle to deal with what they come across online due to the nature of the job, but their working conditions are significantly better. They live in the Bay Area and are paid well. They are lawyers, veterans, former teachers, economists and consultants. They speak over 15 languages. They represent the initial vision for the content moderator.

The appalling working conditions of contractors are a direct result of the internet’s unforeseen explosion. None of these platforms ever could have fathomed having so many users, and no one could have foreseen the horrific videos, photos and posts that would someday find their way onto the internet. As a result, there has been significant outside pressure to further moderate this content, leading some companies to resort to hiring contractors to perform this work.

Many believe the answer is to simply bring all moderators in-house. While tech companies can afford to do so, the mental health impacts of the job remain. Indeed, in many ways, content moderators’ work resembles that of first responders and crime scene investigators. The difference is that for those jobs, employers have developed ways to help their employees cope with trauma, including peer support programs, individual counseling, physical fitness programs and limits on the amount of time spent on the work. Incorporating these solutions would make a world of difference for content moderators.

Myth No. 2: Content moderation is censorship.

Some see content moderation as a form of censorship: a way for organizations to exercise control over users’ speech by blocking comments, posts, reviews, search results and other types of content they deem undesirable.

The truth is, content moderation is not about censorship; it is about providing a healthy and safe environment where users can upload their own products, posts or comments and comfortably engage with others. It’s a tool to improve user experience, ensure that platforms adhere to local and global laws, and help users trust that they can interact through a platform or use a service without fear of being deceived.

Flags and report buttons allow users to notify site owners when something seems out of place. Human moderators ensure that all users comply with community standards. Well-trained AI moderation solutions use filters to screen for inappropriate words, phrases and images to help weed out trolls, bullies and spammers. In other words, content moderators keep online spaces great places to be.
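To make the idea of such filters concrete, here is a minimal, hypothetical keyword screen in Python. The blocklist terms and function names are invented for illustration; real systems rely on trained models and far richer signals, but the basic flag-and-review step looks roughly like this:

```python
# Minimal, hypothetical example of a keyword filter that flags posts for
# human review. Real moderation systems use trained models and many more
# signals; this only illustrates the basic screening step.

BLOCKLIST = {"spamlink.example", "buy followers"}  # placeholder terms

def needs_review(post: str) -> bool:
    """Return True if the post contains any blocklisted phrase."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

if __name__ == "__main__":
    print(needs_review("Click spamlink.example for free stuff"))  # True: flagged
    print(needs_review("Had a great day at the beach"))           # False: passes
```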

In a cyber world filled with extremism, violence, child sexual abuse imagery and revenge porn, there is scarcely time to think about censoring speech that does not align with an individual’s particular politics or viewpoint. Admittedly, moderators are human beings, so mistakes can be made. However, chances are that if content has come down, it is because it is at odds with the platform’s terms of service or policies, or the law, not with the moderator’s personal bias.

Myth No. 3: Tech companies can vet all content before it is uploaded.

Content moderation was never meant to operate at the scale of billions of users. Yet currently, 300 hours of video content is uploaded to YouTube every minute, over 95 million photos are uploaded to Instagram each day, and over 500 million tweets are sent on Twitter each day (that is 6,000 tweets per second). It is simply impossible for human moderators to vet every piece of content that is uploaded before it goes live.

Myth No. 4: Artificial Intelligence (AI) can moderate content on its own.

Automated systems using AI and machine learning are certainly doing quite a bit to help with this unfathomably enormous task. They act as triage systems, for example, by pushing suspect content to human moderators and weeding out some unwanted material on their own. However, AI cannot solve the online content moderation problem without human help.
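As a rough sketch of what this triage step can look like, the hypothetical Python below routes content based on a classifier’s confidence score. The thresholds, names and stand-in classifier are assumptions for illustration, not any platform’s actual system:

```python
# Illustrative triage step: a model scores each item, clear-cut cases are
# handled automatically, and everything uncertain goes to a human moderator.
# Thresholds and names are hypothetical.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "remove", "allow" or "human_review"
    score: float  # model's estimate that the content violates policy

def triage(content, classify) -> ModerationDecision:
    """Route content based on a model's violation score in [0, 1]."""
    score = classify(content)
    if score >= 0.98:   # near-certain violation: remove automatically
        return ModerationDecision("remove", score)
    if score <= 0.05:   # near-certain benign: publish without review
        return ModerationDecision("allow", score)
    # Everything in between goes to a human moderator, who supplies the
    # context and judgment the model lacks.
    return ModerationDecision("human_review", score)

if __name__ == "__main__":
    # Stand-in classifier for demonstration; a real system uses a trained model.
    fake_classifier = lambda text: 0.5 if "?" in text else 0.01
    print(triage("Is this allowed?", fake_classifier))        # routed to a human
    print(triage("Nice photo of a sunset", fake_classifier))  # allowed automatically
```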

AI either uses visual recognition to identify a relatively broad category of objectionable content, or it matches content to an index of banned items. The latter approach is used in cases of obvious illicit material, such as terrorist content or child sexual abuse imagery. In these cases, content is given a “hash,” or an ID, so that if it is detected again, the upload can be blocked automatically. Regardless of which method is used, the parameters must be set by human beings.
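A heavily simplified sketch of the hash-matching approach appears below. Real systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the SHA-256 lookup and the banned-hash index here are stand-ins meant only to show the matching logic:

```python
# Simplified illustration of matching uploads against an index of known
# banned material. Real systems use perceptual hashes (e.g., PhotoDNA);
# SHA-256 is used here only to demonstrate the lookup step.

import hashlib

def content_hash(data: bytes) -> str:
    """Return a stable ID ("hash") for a piece of content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical index of hashes for content already identified as prohibited.
BANNED_HASHES = {content_hash(b"previously identified illegal content")}

def allow_upload(data: bytes) -> bool:
    """Block the upload if the content matches a known banned item."""
    return content_hash(data) not in BANNED_HASHES

if __name__ == "__main__":
    print(allow_upload(b"previously identified illegal content"))  # False: upload blocked
    print(allow_upload(b"ordinary new content"))                   # True: no match, upload proceeds
```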

What’s more, while these AI-driven processes are mostly reliable, problems can arise. As Tarleton Gillespie writes in “Custodians of the Internet,” “Automated detection is just not an easy task — arguably it’s an impossible one, given that offense depends so critically on both interpretation and context.” Indeed, AI struggles to interpret context and to grasp certain dynamics, such as the varying legal regimes in different countries. AI also lacks the ability to account for the constant changes in how humans classify and define problematic content. These complexities make it difficult for people to moderate content, so how can we expect a machine, which is programmed by humans, to get the job done?

This is not to say that there is no hope for AI. Yet for the moment, it will remain a complement to human-driven content moderation as opposed to a replacement.

Myth No. 5: Removing Section 230 will make for better moderation practices.

Under Section 230 of the Communications Decency Act, websites and internet service providers are not liable for the comments, pictures and videos that their users and subscribers post, no matter how bad they are (with certain exceptions). In providing this immunity, lawmakers hoped that companies would feel free to adopt basic conduct codes and delete material they deemed inappropriate. The law also prevents platforms from being held liable for good faith actions to block “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable material.” In fact, Section 230 can be thought of as giving birth to, and making possible, content moderation.

If lawmakers were to rescind Section 230 protection, tech companies would be open to a lawsuit every time a moderator decided to remove content or leave it on the platform. As a result, these companies would be forced to do one of two things: Either allow everything that anyone posts to remain — including horrific content like terrorist execution videos, Ku Klux Klan propaganda, etc. — or only allow approved content to be published. Given the astronomical amount of content uploaded to platforms each day (see Myth No. 3), many companies would likely opt to allow the vilest content to remain on their platforms rather than risk the myriad lawsuits and fines that could easily put them out of business.

In short, removal of Section 230 immunity does not make for more “fair” moderation practices; it removes the incentive to moderate altogether.

Yet this past June, Sen. Josh Hawley unveiled a bill entitled the “Ending Support for Internet Censorship Act.” Under the Hawley bill, the Federal Trade Commission would audit major platforms’ moderation practices every two years to determine whether those practices were “biased against a political party, political candidate or political viewpoint.” Platforms unable to satisfy this standard would lose their Section 230 immunity. This would not only eliminate the incentive for these platforms to moderate their users’ content, but would also effectively grant the government control over online speech.

Conclusion

For years now, many have demanded that various internet platforms “do more” in relation to content moderation. In response, large tech companies have hired thousands of content moderators to do this work. These moderators must perform a complex balancing act: They must follow the law, keep users safe, protect free speech online, and ensure that the product still thrives in the marketplace. Doing so requires that moderators drown themselves in a sea of beheading videos, rape videos and crime scene photos for hours on end, every day. Even in-house content moderators, with good pay and good working conditions, are plagued by the content they see on a constant basis.

Many assume that large tech companies can easily hide the worst parts of humanity that find their way onto the internet. But there is no easy solution to what is happening online. With billions of users, there will never be enough moderators to make sure everything is checked. Legal complaints and methods for reporting abuse help to narrow things down, but even so, the task is overwhelming. As someone who once did this work, I find it frustrating to watch so many politicians demand that companies “do something” without realizing the complexities involved. What’s more, many proposed solutions will not work and would instead create harmful unintended consequences.

What is happening online is a reflection of our society. Tech companies — and content moderators in particular — cannot magically fix the evil found within humanity, nor can they prevent it from finding its way online.

Can improvements be made? Certainly. This is why I left the tech world to work in the policy space. I understand these issues thanks to my first-hand experience and I want to raise awareness and advocate for intelligent change. But I can’t do it on my own. Lawmakers and the public need to understand just what content moderation is, and the consequences of tinkering with it, before drawing conclusions or making demands.
