How much do you know about your online community?

How much do you not know? Are members following your guidelines? Their country’s laws? Are they creating quality experiences for themselves and for one another?

Studies by the Anti-Defamation League and the Pew Research Center found that over 50% of Americans have experienced some form of online harassment, and 37% have experienced severe online harassment.

So, chances are, they’re not.

Spectrum helps social platforms recognize and respond to toxic behavior harming their community.

Only Spectrum’s recognition and response capabilities can match the speed of a viral post.
— Josh Newman, CTO, Spectrum Labs
FEATURED MODELS FOR SOCIAL PLATFORMS

Hate Speech

Social platforms use our hate speech model to detect attacks based on race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity.

Recruitment

Social platforms use our recruitment model to detect attempts to recruit members into known harmful groups.

Self Harm

Social platforms use our self-harm model to detect content that explicitly or suggestively expresses thoughts of self-harm or encourages others to harm themselves.

HIGHLIGHTED SPECTRUM CAPABILITIES

Real Time Recognition

Once a toxic post has gone viral, the speed and consistency of your response matters. Failure to respond appropriately can result in member and advertiser flight.

Our solution operates in real time so you can identify and eliminate toxic behaviors as they happen.
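To illustrate the real-time pattern, here is a minimal sketch in which messages are scored the moment they arrive on a queue and toxic ones are flagged immediately. The term list, scoring function, and threshold are toy placeholders for illustration only, not Spectrum's actual model or API.

```python
# Minimal real-time moderation sketch. The classifier here is a toy
# term-list scorer; a production system would call a trained model.
from queue import Queue

TOXIC_TERMS = {"idiot", "loser"}  # toy stand-in for a real model


def score(message: str) -> float:
    """Toy scorer: fraction of words that match a toxic term list."""
    words = message.lower().split()
    return sum(w in TOXIC_TERMS for w in words) / max(len(words), 1)


def moderate(stream: Queue, threshold: float = 0.2) -> list:
    """Drain the stream, flagging messages whose score crosses the threshold."""
    flagged = []
    while not stream.empty():
        msg = stream.get()
        if score(msg) >= threshold:
            flagged.append(msg)  # in production: hide, warn, or escalate
    return flagged


stream = Queue()
for msg in ["great game everyone", "you are such a loser"]:
    stream.put(msg)

print(moderate(stream))  # → ['you are such a loser']
```

The key design point is that scoring happens per message as it is pulled from the stream, so a response can be triggered before a post spreads further.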

Context Sensing

Talking about weed in a medical marijuana community is different from talking about it in a gardening community.  

That is why our solution for social platforms builds a rich custom word embedding that captures the language specific to each community.
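To illustrate the idea behind community-specific embeddings, here is a minimal sketch using simple co-occurrence vectors built from two toy corpora. The sentences, window size, and scoring are illustrative assumptions, not Spectrum's method: the point is that the word "weed" picks up different neighbors in a medical community than in a gardening one.

```python
# Toy community-specific word vectors: count which words co-occur with
# each word inside a small context window, separately per community.
from collections import Counter, defaultdict
from math import sqrt


def cooccurrence_vectors(sentences, window=2):
    """Build a sparse co-occurrence vector (a Counter) for every word."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs


def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    num = sum(a[k] * b[k] for k in a if k in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


medical = ["weed relieves pain for many patients",
           "doctors prescribe weed to treat pain"]
gardening = ["pull every weed out of the soil",
             "that weed will choke the garden plants"]

med_vecs = cooccurrence_vectors(medical)
garden_vecs = cooccurrence_vectors(gardening)

# "weed" co-occurs with "pain" only in the medical community's corpus
print(med_vecs["weed"]["pain"], garden_vecs["weed"]["pain"])  # → 1 0
```

Because each community gets its own vectors, the same word ends up with a different representation in each, which is what lets a downstream classifier treat "weed" differently in a medical marijuana community than in a gardening one.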