Online abuse is often painted as a problem confined to a few toxic (and highly active) abusers. But when the makers of League of Legends analyzed their data, they found the problem was much broader than that.
This process led them to a surprising insight—one that “shaped our entire approach to this problem,” says Jeffrey Lin, Riot’s lead designer of social systems, who spoke about the process at last year’s Game Developers Conference. “If we remove all toxic players from the game, do we solve the player behavior problem? We don’t.” That is, if you think most online abuse is hurled by a small group of maladapted trolls, you’re wrong. Riot found that persistently negative players were only responsible for roughly 13 percent of the game’s bad behavior. The other 87 percent was coming from players whose presence, most of the time, seemed to be generally inoffensive or even positive. These gamers were lashing out only occasionally, in isolated incidents—but their outbursts often snowballed through the community. Banning the worst trolls wouldn’t be enough to clean up League of Legends, Riot’s player behavior team realized. Nothing less than community-wide reforms could succeed. (Source)
See also Reducing Abuse on Twitter
Haters are persistent and scary. See The Stalkers of Jimmy Wales
People are more likely to abide by rulings they dislike if the process used to reach them is perceived as fair. See Fair Process Effect