Fortnite Quickly Removes Suicide Scene

Polygon reports Fortnite recently removed a fan-submitted scene after gamers complained it depicted suicide. While the creator denies the claim and defends the submission, Fortnite responded, “This creator’s content was removed from the Block because it did not adhere to our content creation guidelines.”

Human moderators have a tough job. Every day they sift through offensive material and make judgement calls. While these people have the best of intentions, humans are known to be biased and inconsistent in their judgements. One day a piece of material doesn’t look bad, but the next day it looks terrible. Moderators powered by an AI solution could become more effective, efficient and consistent over time.
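To make the consistency argument concrete, here is a toy sketch. Real moderation systems use trained machine-learning classifiers rather than word lists, but even a trivial rule-based filter illustrates the point: it applies identical criteria to every item, every time, and routes only the flagged items to a human. The term list, threshold, and function names below are hypothetical placeholders, not any platform’s actual policy.

```python
# Illustrative sketch only: a naive substring-based flagger. A production
# system would use a trained classifier; this just shows the "consistent
# first pass + human review of flagged items" pattern.

FLAGGED_TERMS = {"kill yourself", "suicide"}  # hypothetical term list
THRESHOLD = 1  # flag for human review if at least one term matches

def score(text: str) -> int:
    """Count how many flagged terms appear in the text (case-insensitive)."""
    lowered = text.lower()
    return sum(term in lowered for term in FLAGGED_TERMS)

def needs_review(text: str) -> bool:
    """Return True when an item should be routed to a human moderator."""
    return score(text) >= THRESHOLD

if __name__ == "__main__":
    items = ["great game, well done", "go kill yourself"]
    for item in items:
        print(item, "->", "review" if needs_review(item) else "ok")
```

Unlike a human reviewer, this filter never has an off day: the same input always produces the same decision, which is the baseline consistency an AI-assisted pipeline is meant to provide.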

12 Games with Known Toxic Communities

Ranker released a list of 12 games with “aggressively toxic communities”. Here’s the rundown:

  • League of Legends

  • Call of Duty

  • Counter-Strike: Global Offensive

  • Dota 2

  • Overwatch

  • World of Tanks

  • ARK: Survival Evolved

  • World of Warcraft

  • Undertale

  • Minecraft

  • Dark Souls

  • Left 4 Dead

Ranker describes the toxicity specific to each game: mostly, it’s racism, homophobia and misogyny. It also points out that many, though not all, developers are trying to build systems to stop the behavior.

Imagine being a game developer. You’ve joined a company to build a great game, but you have to quickly switch gears to develop systems to combat toxic behavior. It isn’t exactly what you signed up for. Worse, your users are savvy, almost immediately finding holes in your system.

"Flop" Accounts Latest Way to Bully on Instagram

Sara Manavis writes about flop accounts in an article for New Statesman America. Flop accounts are fake accounts, usually run by groups of teenagers, dedicated to mocking, insulting, bullying and making fun of specific people or groups.

The anonymity provided by a flop account makes the people behind it bold, resulting in out-of-control toxic behavior.

It is troubling to see online bullying happen. It is scary to not know who is behind it.

Bouman Attacked Online After Achievement

The Guardian contributor Jill Filipovic calls our attention to the online harassment and abuse Dr Katie Bouman has experienced since her monumental achievement imaging a black hole.

Filipovic recounts, “Trolls created fake social media accounts impersonating Bouman. They questioned her contribution to the project. When she said that she was part of a team who all worked hard to make the photo happen, they dug in deeper, suggesting she was only getting public attention because she was a woman, when men did all the real work.”

How do social platforms keep up when bad actors create fake accounts to attack people?

Twitter CEO Talks Harassment at TED Talk

ABC News covered Jack Dorsey’s latest TED Talk, in which he addressed the platform’s problem with toxic behaviors like harassment and hate speech. Audience members and those streaming the talk lobbed tough questions at the CEO.

ABC News highlighted a tweet from The Guardian journalist Carole Cadwalladr:

"Ooh exciting. @jack is taking questions from Twitter users live at TED today. Anyone?? I’d like to know why a video that showed me being beaten up & threatened with a gun to soundtrack of Russian anthem stayed up for 72 hours despite 1000s of complaints, @jack? #AskJackAtTED,"

Clearly the platform is struggling to address toxicity consistently and at scale. It has publicly shared its interest in using AI to help.

White Nationalists Use Online Gaming to Recruit

Zach Beauchamp’s recent Vox article calls attention to a rising trend: white nationalist groups recruiting members within online gaming communities.

While online gaming actually attracts a diverse population, most people believe online games are played by socially awkward men in their mid-teens to late 20s. It is that perception, the article asserts, that drew white nationalist groups into the fold: that age range and personality is their preferred target.

Exactly how does this recruiting happen? In-game chat? Game forums? Recognizing and responding to these actions is difficult.

Online dating scam dupes man out of $143,000

Recently, CBC News Canada interviewed a man who had been defrauded of $143,000 in an online dating scam. Reading through the article, you can see how sophisticated the effort was: it lasted years and included the creation of false pictures, bank statements and medical records.

There are many negative consequences when dating apps fail to provide members with safeguards against scams like this one. First and most importantly, members fall victim to crimes. Second, news coverage of scams pushes existing and future members away from the app, resulting in lost potential revenue.

If dating apps can’t guarantee member safety, then they won’t continue to exist.

'Cloaking' is the latest negative behavior in online dating

Fox News recently published an article about cloaking: the act of baiting someone into a date or meet-up, failing to show, and then “going dark” by deleting all trace of contact.

From a distance this behavior simply seems childish, but what happens when more and more members do it as a joke? Then your dating app develops a poor reputation.

Worse, what if cloaking is a gateway to something more disturbing?