How much do you know about your online community?
How much do you not know? Are members following your guidelines? Their country’s laws? Are they creating quality experiences for themselves and for one another?
Chances are, they're not.
FEATURED MODELS FOR SOCIAL PLATFORMS
Social platforms use our hate speech model to detect attacks based on race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity.
Social platforms use our recruiting model to detect attempts to recruit members into known harmful groups.
Social platforms use our self-harm model to detect content that explicitly or suggestively exhibits thoughts of self-harm or encourages others to harm themselves.
HIGHLIGHTED SPECTRUM CAPABILITIES
Real-Time Recognition
Once a toxic post has gone viral, the speed and consistency of your response matter. Failure to respond appropriately can result in member and advertiser flight.
Our solution operates in real time so you can identify and eliminate toxic behaviors as they happen.
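To make the real-time idea concrete, here is a minimal sketch of that flow, not Spectrum's actual API: each post is scored the moment it arrives and acted on immediately, rather than waiting for a periodic batch review. The `moderate_stream` and `toy_score` names are hypothetical, invented for this illustration.

```python
from typing import Callable, Iterable, Iterator, Tuple

def moderate_stream(
    posts: Iterable[str],
    score: Callable[[str], float],
    threshold: float = 0.8,
) -> Iterator[Tuple[str, str]]:
    """Score each post as it arrives and yield a (post, action) decision."""
    for post in posts:
        action = "remove" if score(post) >= threshold else "allow"
        yield post, action

def toy_score(post: str) -> float:
    """Stand-in scorer for illustration only; a production system
    would call a trained toxicity model instead."""
    return 1.0 if "toxic" in post.lower() else 0.0

for post, action in moderate_stream(["hello world", "toxic insult"], toy_score):
    print(post, action)
```

Because decisions are made per post inside the stream, a harmful post can be removed before it spreads, which is the point of the real-time claim above.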
Talking about weed in a medical marijuana community is different from talking about it in a gardening community.
This is why our solution for social platforms includes building a rich custom word embedding that captures the language specific to each community.
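A toy sketch of the idea, not Spectrum's implementation: if you build word vectors separately per community, the same word ends up with different neighbors in each one. Below, simple co-occurrence vectors are built from two tiny hypothetical corpora, so "weed" associates with patients and pain in one community and with spreading through garden beds in the other.

```python
import math
from collections import Counter, defaultdict

def embed(corpus):
    """Map each word to a sparse co-occurrence vector over one community's corpus."""
    vectors = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            for j, ctx in enumerate(words):
                if i != j:
                    vectors[w][ctx] += 1
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

medical = [
    "weed helps patients manage chronic pain",
    "patients report weed eases pain and nausea",
]
gardening = [
    "pull every weed before it spreads through the beds",
    "a weed spreads fast in untended beds",
]

med_vecs = embed(medical)
garden_vecs = embed(gardening)

# Same word, different community, different context vector:
print(cosine(med_vecs["weed"], med_vecs["pain"]))
print(cosine(garden_vecs["weed"], garden_vecs["beds"]))
```

A production system would use learned dense embeddings over far larger corpora, but the design choice is the same: meaning is resolved per community rather than from one global vocabulary.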