Whether an online community is an integral part of your brand or something you set up as a way of allowing customers to communicate with you and each other, a strong moderation policy is vital to ensure that your community stays healthy (and that you avoid any potential liability). So, how do you set up such a policy?
Determining a Good Set of Community Guidelines
The first thing you need to do is put together your community guidelines. These should be clearly posted in a location where people cannot help but see them. The exact details of your guidelines may vary depending on the nature of the community, but here are some things to consider:
Clear, stated consequences. Your guidelines must spell out the consequences of violations, which can range from deletion of the offending content all the way through to a permanent ban. If you run your own community software, make sure it supports suspensions of any duration.
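To illustrate what "suspensions of any duration" means in practice, here is a minimal sketch of a suspension record. The names (`Suspension`, `is_active`) are hypothetical, not taken from any particular forum software; the key idea is that an expiry of `None` models a permanent ban while any other value models a timed suspension.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Suspension:
    user_id: str
    reason: str
    until: Optional[datetime]  # None means a permanent ban

    def is_active(self, now: Optional[datetime] = None) -> bool:
        """A suspension is active if it never expires or has not yet expired."""
        now = now or datetime.now(timezone.utc)
        return self.until is None or now < self.until

# A one-week suspension and a permanent ban use the same record type.
week_ban = Suspension("user42", "personal attacks",
                      datetime.now(timezone.utc) + timedelta(days=7))
perm_ban = Suspension("user99", "hate speech", None)
```

Storing the expiry rather than a fixed "ban level" lets moderators escalate (a day, a week, a month, forever) without code changes.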
A name policy. Depending on the nature of your community, you may or may not want to require that users use their real names. Alternatively, for some communities, you may allow or even encourage the use of aliases. For example, if you sell video games marketed to minors, you might even require that users not reveal their legal names.
Content guidelines. For almost all communities, you should have rules against the posting of adult material, profanity, and hate speech, including the use of slurs. Modern AI moderation can filter out bad language and produces fewer false positives than older tools did. You should also disallow personal attacks on other users.
One community's guidelines offer a solid example: they disallow personal attacks, trolling, posting personal information or private communications, spam, bumping threads (posting just to push them to the top), advertising, hijacking (intentionally going off topic), cross-posting, using bots, and fake obituaries.
Another set of guidelines bans posting links without context, which is also a good idea, as bots often do this. Your guidelines should be clear and straightforward, leaving as little room as possible for subjective interpretation; there will always be some.
Guidelines against abusing whatever software you are using. For example, Slack allows auto-responders, which can easily be abused to spam a channel into unusability. Check for ways the system can be abused and keep abreast of security issues.
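A common technical backstop against this kind of abuse is a per-user rate limit. The sketch below (all names are illustrative, not part of any real platform's API) uses a sliding window: each user may post at most `max_msgs` messages in any `window_s`-second span, which blunts runaway auto-responders without affecting normal conversation.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateLimiter:
    """Sliding-window limiter: at most max_msgs per window_s seconds, per user."""

    def __init__(self, max_msgs: int = 5, window_s: float = 10.0):
        self.max_msgs = max_msgs
        self.window_s = window_s
        self.history: dict = defaultdict(deque)  # user_id -> timestamps

    def allow(self, user_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[user_id]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_msgs:
            return False  # over the limit: reject (or queue) the message
        q.append(now)
        return True
```

A moderation bot would call `allow()` before relaying each message, and could escalate (mute, then suspend) a user who repeatedly hits the limit.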
Empower users and moderators. If you have moderators, they should be listed, for the sake of transparency. Some sites, on the other hand, find it better to keep moderators anonymous. Make sure your guidelines are written in an encouraging, empowering way and don't read as though they were written by lawyers. You want them to actually be read, and you want your customers to feel some ownership in the community.
Enforcing Community Guidelines
Enforcing your community guidelines happens on two levels: human moderation and AI moderation.
For small communities, human moderation is sufficient. As your community grows, however, your moderators will struggle to keep up, and strong AI moderation becomes necessary. Avoid solutions that simply blacklist words (these are notorious for false positives) in favor of a context-based, customized system that understands your community and covers the problems you are actually experiencing.
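The classic false positive of substring blacklisting is an innocent word that happens to contain a blocked term. The toy sketch below (the word list and function names are invented for illustration; real context-aware moderation uses trained models, not regexes) shows how a naive substring check flags harmless text, while even a simple word-boundary check avoids that particular error.

```python
import re

BLOCKED = ["ass"]  # illustrative single-entry blacklist

def naive_filter(text: str) -> bool:
    """Flags any message containing a blocked substring, anywhere."""
    low = text.lower()
    return any(word in low for word in BLOCKED)

def boundary_filter(text: str) -> bool:
    """Flags blocked terms only when they appear as whole words."""
    low = text.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", low) for word in BLOCKED)

naive_filter("See you in class tomorrow")     # True: a false positive
boundary_filter("See you in class tomorrow")  # False: the word is "class"
```

Word boundaries fix only this one failure mode; they do nothing about sarcasm, coded language, or slurs embedded in usernames, which is why context-aware moderation tools are worth the investment as a community scales.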
AI moderation is not a substitute for your moderation team, especially since people want the human touch, but it is a tool that can help moderators keep up with the fast-paced, 24-hour nature of online communication and ensure the safety of your users.
Having and enforcing the right community guidelines is how you keep your company's online communities healthy and non-toxic. This in turn helps your customers feel safe and encourages them to stick around.