What are the real-world effects of content moderation?

Moderating online content has become a crucial game of cat-and-mouse to ensure that harmful imagery and speech don’t flourish in the digital space. Without effective methods to restrict or remove dangerous content, users are left to wade through an online wild west littered with bullies, extremists, propagandists, and other malicious actors intent on causing serious harm to unsuspecting users.

In the past few years, we’ve seen the well-publicized effects of platforms lacking sufficient means of content moderation.

Many of these platforms employ huge teams of content moderators, but the sheer amount of work is simply overwhelming – and often devastating.

In a chilling new report, moderators contracted by Facebook told stories of sifting through horrifying images of animal cruelty, child abuse, and pornography. Moderators have developed PTSD-like symptoms under the immense pressure to assess vast amounts of content, and one was reportedly pushed to the brink of death by the stress. Other companies that employ online moderators have cycled through short-term contractors who frequently burn out.

Content moderation has startling real-world consequences. Not only are human-curated approaches increasingly ineffective as the scale and scope of the problem grow, but the very act of moderating content causes serious harm to those involved. It’s a toxic double-edged sword: those tasked with cleaning up damaging content are themselves damaged in the process.

There’s no use pretending that people’s worst online instincts will ever be eliminated. Controversial, shocking, and upsetting content will inevitably be created by those looking to cause harm, push an agenda, or sow discord. But to protect the vast majority of users who don’t follow those negative impulses, content moderation is a necessary evil.

Fortunately, artificial intelligence has risen to the challenge.

AI-enhanced content moderation tools are equipped to understand context and nuance, and can shoulder much of the moderation burden. By supplementing human moderators with technology that prevents or restricts harmful content from finding its way into their community, online platforms can rest a little easier knowing their moderators and users won’t be subjected to the worst instincts of the Internet.
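To make the idea of “supplementing” human moderators concrete, here is a minimal sketch of a hybrid triage flow, assuming a classifier that returns a harm score between 0 and 1; the thresholds, names, and scoring model are hypothetical illustrations, not any specific vendor’s API. Content the model is confident is harmful never reaches a person, clearly benign content is published automatically, and only the ambiguous middle band is queued for human review.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    PUBLISH = auto()        # low risk: posted without human involvement
    HUMAN_REVIEW = auto()   # borderline: queued for a moderator
    BLOCK = auto()          # high risk: never shown to moderators or users


@dataclass
class TriagePolicy:
    """Illustrative thresholds; real systems tune these per harm category."""
    block_above: float = 0.95
    review_above: float = 0.60


def triage(harm_score: float, policy: TriagePolicy = TriagePolicy()) -> Decision:
    """Route a piece of content based on a model's harm score in [0, 1].

    The score would come from a hypothetical classifier (e.g. a text or
    image model trained on policy violations); only the uncertain middle
    band ever lands in front of a human moderator.
    """
    if harm_score >= policy.block_above:
        return Decision.BLOCK
    if harm_score >= policy.review_above:
        return Decision.HUMAN_REVIEW
    return Decision.PUBLISH


# Example: only the 0.72 case would reach a human reviewer.
for score in (0.10, 0.72, 0.99):
    print(score, triage(score).name)
```

The design point is that human judgment is reserved for the genuinely ambiguous cases, where context and nuance matter most and where the volume is a small fraction of the overall stream.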