It’s also a problem that Riot Games has dedicated an entire team to for years. Kimberly Voll, a senior technical designer at Riot who works on curbing disruptive player behavior, told Polygon that it’s an ongoing conversation. Voll also co-founded the Fair Play Alliance, an industry-wide initiative that allows companies to share resources and information about player behavior. It’s an area she’s incredibly passionate about, and one she hopes won’t even be a conversation in five years’ time, because toxicity in games won’t be an issue.
For now, however, Voll recognizes that games are subject to the same toxicity that runs rampant across the internet at large. There are people who want to play League of Legends just to troll and get a rise out of others. Streamers like the infamous Tyler1, who was banned for more than 600 days before Riot Games lifted the punishment, are among the most notorious examples. Voll wouldn’t comment on Tyler1 specifically, but she did speak about Riot’s current efforts to provide as many players as possible with the safest, best gameplay experience it can right now.
“There’s a certain point at which we need to step back and go, ‘Well, that’s people. That’s humans.’ There’s always going to be the one percent that’s out there and disruptive,” Voll said. “We can’t ever totally exit those folks from the community. They’re kind of a normal part of a big online game, but we do think about them. Our first goal is to the player that’s being hurt by them. The problem with one disruptive player, and a severely disruptive player, is that they’re potentially affecting nine other people. It doesn’t take very long for that to explode.”
Voll refers to these players as the small sliver, or the one percent, of people who want to ruin the experience for everyone else. League of Legends has more than 100 million players — a player base larger than the population of most countries. The team can’t expect to eradicate toxicity, or player disruptiveness, as Voll calls it, entirely. It’s an impossible feat. The focus is instead on incentivizing good behavior, rewarding people for not acting like jerks in a stressful situation. Riot Games calls this Honor, a system that initially launched in 2012 and was revamped in June 2017.
Think of it as classical conditioning — Pavlov’s dogs in action. If a player is rewarded with loot, items or cool aesthetic bonuses for acting decently during a stressful game, they’re more likely to continue that type of behavior. It’s the same reason startups try to gamify everything.
“Honor was a way of nudging players back to thinking about teams first,” Voll said. “Even if you’re a little cranky, can we start by saying, ‘Hey, we’re all in this together on this team, and we don’t get to play this game without other human beings.’ Bot games are fun but they’re not that good.”
The idea of pairing negative players with bots, essentially shielding other people from disruptive, destructive atmospheres, is something the industry toyed with in 2015. Voll said shadowbotting was discussed at Riot Games, along with a couple of other ideas thrown around when the team was trying to come up with new ways to curb negative behavior, but it ultimately decided against the approach.
“We talked about a bunch of different things like that,” Voll said. “One of our goals is a players-first mentality. When we started thinking about things from the root causes of the problem we realized ... you know, really bad things we want to shield our players from, because that’s awful and no one should be subjected to that kind of harassment and hate speech. At least to the best that we can. It’s a hard problem. A lot of it is getting better habits into the community; looking into ways that friction is causing some of those problems, and realizing those mechanics of the game and then shifting them in the game.
“Our goal is really more one of helping players be better with one another versus getting them out of the game.”
Riot Games doesn’t want to permanently ban players — but the company is in a predicament. The strategy is to figure out a way to help the players who are causing problems for the community, while ensuring that any overtly disgusting and vulgar language is dealt with quickly and the appropriate consequences are doled out. Trying to govern 100 million people around the world isn’t an easy task, and it’s one Voll said the company thinks about all the time.
“We try to shield as best as we can, and figure out ways to minimize the hurt they’re causing to other players,” Voll said. “That’s mostly how we think about it. We talk a lot about being a multigenerational game. That means we have to think about these problems on a longer-term scale, and that’s something we’re still figuring out. How do you, as a game developer, weigh the natural growth of any community — of that one percent of folks who will grow away from that behavior — and how do you focus on the growth of the benefit of the community? We’re still learning and figuring that out.”