One of the most powerful methods of moderating online content is artificial intelligence. Sophisticated AI models have been developed to identify specific harmful activities, like bullying and phishing, and can be deployed where users are likely to experience abuse. Trained to understand context and nuance, AI-enhanced content moderation is even capable of recognizing subtle methods of trolling and other disruptive behavior.
That said, while algorithms can pinpoint harmful content fairly reliably in a single language, the task gets trickier when a platform expands to a global audience spread across multiple languages.
Consider the issues most often encountered by professional translators, typically broken down into categories such as semantic, grammatical, syntactic, rhetorical, pragmatic, and cultural. From the placement of pronouns to the use of metaphors and idioms to the inclusion of sarcasm, each of these issues presents a different type of challenge requiring its own understanding and solution. As language becomes more complex, it poses a greater challenge for linguists, and for their digital equivalents, to decipher.
Even today’s popular translation applications can end up garbling a message’s original intention. Take a simple phrase: “You hurt me.” Translated from English to Spanish, the phrase becomes “me lastimaste.” But when “me lastimaste” is translated from Spanish back into English, some tools can flip the subject and object and return “I hurt you.” A simple round trip of three words produced a phrase with — quite literally — the opposite meaning.
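One way engineers catch this kind of drift is a round-trip (back-translation) check: translate forward, translate back, and compare the result to the original. The sketch below is a minimal illustration of that pattern, using a toy dictionary-based `translate` stub as a stand-in for a real machine-translation system; the stub deliberately simulates the subject/object flip described above, so the function names and translation table are assumptions, not a real API.

```python
def translate(text: str, src: str, dst: str) -> str:
    """Toy stand-in for a real MT system (hypothetical, for illustration only).

    The es->en entry deliberately simulates the subject/object
    flip discussed in the text.
    """
    table = {
        ("en", "es"): {"You hurt me.": "Me lastimaste."},
        ("es", "en"): {"Me lastimaste.": "I hurt you."},  # simulated mistranslation
    }
    return table[(src, dst)].get(text, text)


def round_trip_consistent(text: str, src: str = "en", dst: str = "es") -> bool:
    """Return True if text survives a src -> dst -> src round trip unchanged."""
    back = translate(translate(text, src, dst), dst, src)
    return back == text


# The simulated mistranslation is caught by the check:
print(round_trip_consistent("You hurt me."))  # False
```

A real pipeline would compare meaning rather than exact strings (for example, with embedding similarity), since a faithful back-translation can legitimately differ in wording; the exact-match comparison here is the simplest possible version of the idea.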
Even small mistakes in translation can seriously alter the original intention of a message. AI models tasked with content moderation must be equipped to handle such nuance and complexity in order to make accurate, precise judgments.