Trash talking is one thing.
Harassment, hate speech, and misogyny are something else entirely. They don’t belong in your gameplay. Yet they are happening.
Gamers joining the fun are subjected to virulent abuse.
Spectrum helps gaming companies create and maintain healthy communities
with a systematic, scalable approach to managing toxic behavior that delivers multiple benefits.
Spectrum’s classifiers help gaming companies recognize toxic behavior in real time across multiple languages.
Prioritize & Respond
Results are used to prioritize and automate responses.
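The flow above can be sketched as a simple triage loop. This is an illustrative assumption, not Spectrum’s actual API: the score names, thresholds, and message format are all hypothetical, standing in for whatever per-behavior scores a real-time classifier returns.

```python
import heapq

# Hypothetical classifier output: each message gets per-behavior scores in [0, 1].
# Behavior names and threshold values below are illustrative assumptions.
messages = [
    {"id": 1, "text": "gg everyone",          "scores": {"harassment": 0.02, "hate_speech": 0.01}},
    {"id": 2, "text": "<abusive message>",    "scores": {"harassment": 0.97, "hate_speech": 0.10}},
    {"id": 3, "text": "<borderline message>", "scores": {"harassment": 0.55, "hate_speech": 0.05}},
]

AUTO_ACTION_THRESHOLD = 0.9  # respond automatically (e.g., mute) above this
REVIEW_THRESHOLD = 0.5       # queue for human review above this

review_queue = []  # max-heap by severity (scores negated for heapq's min-heap)

for msg in messages:
    severity = max(msg["scores"].values())
    if severity >= AUTO_ACTION_THRESHOLD:
        print(f"auto-action on message {msg['id']} (severity {severity:.2f})")
    elif severity >= REVIEW_THRESHOLD:
        heapq.heappush(review_queue, (-severity, msg["id"]))

# Moderators pull the most severe queued items first.
while review_queue:
    neg_severity, msg_id = heapq.heappop(review_queue)
    print(f"review message {msg_id} (severity {-neg_severity:.2f})")
```

The point of the split is that clear-cut cases are handled instantly while ambiguous ones reach a human in severity order, so moderator time goes where it matters most.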
FEATURED MODELS FOR GAMING
Gaming companies use our grooming model to detect content created and distributed to recruit members or encourage underage relationships.
Our harassment model accurately detects aggressive pressure or intimidation within in-game chat and game forums.
Gaming companies use our hate speech model to detect attacks on race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity.
Talking about a ‘mouse’ in an electronics setting is different from talking about a ‘mouse’ in a home setting.
That is why our solution for the gaming industry includes building a rich, custom word embedding that captures the language specific to each game.
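The intuition behind a domain-specific embedding can be shown with a toy co-occurrence count, the raw signal embedding methods are trained on. The corpora and helper function below are purely illustrative assumptions, not Spectrum’s pipeline: the same word ends up with different context profiles depending on which domain’s text it is counted in.

```python
from collections import Counter

def context_counts(sentences, target, window=2):
    """Count words co-occurring with `target` within +/- `window` positions."""
    counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            if w == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(words[lo:i] + words[i + 1:hi])
    return counts

# Toy corpora standing in for domain-specific chat logs (illustrative only).
electronics = [
    "my wireless mouse has high dpi",
    "plug the mouse into the usb port",
]
home = [
    "a mouse chewed through the pantry wall",
    "set a trap to catch the mouse",
]

# 'mouse' keeps different company in each domain, so an embedding trained
# on each corpus would place it near different neighbors.
print(context_counts(electronics, "mouse").most_common(3))
print(context_counts(home, "mouse").most_common(3))
```

Real embeddings generalize this counting into dense vectors, which is why training them on each game’s own chat makes classification context-aware.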
Training AI is difficult. You have to build, or buy, a complete, unbiased training set in multiple languages. Most companies spend months doing this.
Our extensive library of trained models allows you to jumpstart your efforts and see value right away. The models are then continually iterated on and fine-tuned to your specific guidelines and needs.