What does content moderation look like?

For any online community, there are several options for weeding out abusive posts or unwanted comments.

Here are five of the best-known methods:

  1. Pre-Moderation: Sends every post to a moderator for review before it’s approved for public view. After submission, content typically sits in a queue behind other posts, waiting to be approved or rejected by a content moderator. (This is often seen in Facebook groups, where posts are marked “Pending moderator review” after submission.)

    While this method offers a high level of moderator control, it keeps users’ content from appearing immediately and can slow down a discussion – especially in large communities.

  2. Post-Moderation: Reviews content after publication. Users submit posts and comments directly to the site, where they go live immediately; moderators then review and remove objectionable content as quickly as possible.

    Adopting the post-moderation method lets sites operate at the speed Internet users have come to expect. However, for sites where large amounts of content are created every day, this can place a massive burden on moderators to remove infringing content in real time. (See: recent reports on overstressed Facebook moderators.) A minimal sketch contrasting pre- and post-moderation appears after this list.

  3. Reactive Moderation: Relies on users to flag unwelcome or spammy content; this is the method most commonly seen on social networks. Instead of having moderators patrol comment sections and message boards, users effectively police the community by reporting inappropriate items.

    Through reactive moderation, online communities can scale up without adding bigger moderation teams – and it empowers users to speak up when they see harmful or inflammatory posts. Nevertheless, reactive moderation risks leaving unwanted content on a site for long stretches until a user actually flags the abusive post and the mods respond. (A sketch of a simple flag-threshold workflow appears after this list.)

  4. Distributed Moderation: Relies on a rating system in which users vote on whether content meets community standards. Content is judged by the ratings or points users assign, and posts that fall below an approved threshold are designated for removal. (e.g. Reddit’s upvoting system for boosting posts’ visibility.)

    Distributed moderation is not widely used because of its inherent complexity, but it’s a system with real promise if designed and scaled well. (A minimal vote-threshold sketch appears after this list.)

  5. Automated Moderation: Employs tools to automatically detect and remove problematic posts. Once in place, automated moderation tools can filter out offensive words and phrases, ban users from posting based on IP addresses, or prevent certain content from being posted altogether.

    Automated tools won’t entirely replace the work of a human moderation team. But they can significantly reduce the workload for users and moderators by handling the heavy lifting, freeing mods to focus on higher-level decision-making. (A minimal keyword-and-IP filter sketch appears after this list.)
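
To make the difference between the first two methods concrete, here is a minimal Python sketch of both pipelines. The class and method names (PreModeratedBoard, PostModeratedBoard, submit, moderate) are purely illustrative and not taken from any real platform; a production system would add persistence, audit logs, and an appeals process.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    published: bool = False


class PreModeratedBoard:
    """Pre-moderation: posts wait in a queue and only appear once a moderator approves them."""
    def __init__(self):
        self.review_queue = deque()
        self.visible = []

    def submit(self, post: Post) -> None:
        self.review_queue.append(post)        # "Pending moderator review"

    def moderate(self, approve) -> None:
        while self.review_queue:
            post = self.review_queue.popleft()
            if approve(post):                 # approve() stands in for a human decision
                post.published = True
                self.visible.append(post)     # only now does the post go public


class PostModeratedBoard:
    """Post-moderation: posts appear immediately and are reviewed after the fact."""
    def __init__(self):
        self.review_queue = deque()
        self.visible = []

    def submit(self, post: Post) -> None:
        post.published = True
        self.visible.append(post)             # live right away
        self.review_queue.append(post)        # still reviewed, just later

    def moderate(self, approve) -> None:
        while self.review_queue:
            post = self.review_queue.popleft()
            if not approve(post):
                post.published = False
                self.visible.remove(post)     # taken down only after review
```

The moderation work is identical in both cases; the only thing that changes is whether a post is visible while it waits in the queue.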
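
Reactive moderation can be reduced to a report counter with an escalation threshold. The sketch below is a deliberate simplification: the threshold of three reports is arbitrary, and real platforms also weigh reporter reputation, content category, and severity.

```python
class ReactiveModeration:
    """Content stays up until enough distinct users report it; flagged posts join the mod queue."""
    def __init__(self, flag_threshold: int = 3):
        self.flag_threshold = flag_threshold
        self.reports = {}        # post_id -> set of users who reported it
        self.mod_queue = []      # post_ids escalated for human review

    def report(self, post_id: str, reporter: str) -> None:
        reporters = self.reports.setdefault(post_id, set())
        reporters.add(reporter)  # a set, so each user counts once per post
        if len(reporters) >= self.flag_threshold and post_id not in self.mod_queue:
            self.mod_queue.append(post_id)    # escalate only after enough reports
```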
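
Distributed moderation boils down to a running vote tally compared against a removal threshold. Again, the numbers here (a net score below -5 triggers removal) are invented for illustration only.

```python
class DistributedModeration:
    """Community votes decide the outcome; posts below a score threshold are marked for removal."""
    def __init__(self, removal_threshold: int = -5):
        self.removal_threshold = removal_threshold
        self.scores = {}         # post_id -> net score (upvotes minus downvotes)

    def vote(self, post_id: str, upvote: bool) -> None:
        self.scores[post_id] = self.scores.get(post_id, 0) + (1 if upvote else -1)

    def flagged_for_removal(self) -> list:
        # Posts whose net score has dropped below the community's threshold.
        return [post_id for post_id, score in self.scores.items()
                if score < self.removal_threshold]
```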
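
Finally, the simplest form of automated moderation is a submit-time filter. The blocked terms and banned IP below are placeholders; real tools use far richer word lists, pattern matching, and machine-learned classifiers.

```python
BLOCKED_TERMS = {"spamword", "badword"}     # placeholder terms, not a real blocklist
BANNED_IPS = {"203.0.113.7"}                # documentation-range IP, purely illustrative


def may_publish(text: str, poster_ip: str) -> bool:
    """Return True if the post can go live, False if the automated filter blocks it."""
    if poster_ip in BANNED_IPS:
        return False                        # IP-based posting ban
    words = set(text.lower().split())
    if words & BLOCKED_TERMS:
        return False                        # offensive-word filter
    return True
```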


There’s still no silver bullet for every content moderation need. Many, a few, or even none of these methods may work for your particular platform – but it’s important to understand how each type differs before implementing the strategy that best protects your users and community.