Intel responds to hate speech tool getting roasted by the internet

People can’t stop talking about Intel’s new software, Bleep

Ana Diaz (she/her) is a culture writer at Polygon, covering internet culture, fandom, and video games. Her work has previously appeared at NPR, Wired, and The Verge.

Have you ever wanted to censor a little hate speech while playing a game, but not all of it? Thanks to Intel’s Bleep, a software that uses AI to censor voice chat, you can.

Bleep was developed in partnership with a company called Spirit AI, and is currently in beta following a prototype developed two years ago. It uses AI to censor hate speech in real time during gameplay. The software “bleeps” out offending language (hence the name). The most recent iteration of the tech was shown off during an event highlighting Intel’s latest developments. During this presentation, Roger Chandler, vice president and general manager of client XPU products and solutions, positioned the company as “stewards” of PC gaming who feel some responsibility in moving the platform forward and “making gaming better.”

Intel spoke to gamers about their needs, the spokesperson said, which included addressing what the company called "gaming's dark side": online toxicity.

“Across the board, and across the globe, players raised concerns about witnessing and experiencing toxicity,” he said, before sharing some statistics on how often gamers experience harassment online. According to the Anti-Defamation League, “22% of gamers have quit playing certain games as a result of these negative experiences.”

To address the problem, Intel created Bleep. And while the program isn’t new, it became the center of attention when stills from the 40-minute video presentation on it went viral Wednesday. The screenshot depicts the user settings for the software and shows a sliding scale where people can choose between “none, some, most, or all” of categories of hate speech like “racism and xenophobia” or “misogyny.” There’s also a toggle for the N-word.

"The intent of this has always been to put that nuanced control in the hands of the users," Marcus Kennedy, general manager of Intel's gaming division, told Polygon over video chat. As Kennedy explained it to Polygon, Intel intended for those sliders to give players options, depending on the situation. Certain kinds of shit talk might be acceptable, even playful, when shared between friends, but might not be acceptable when it's a stranger shouting at you.

When asked about the difference between the "none, some, most or all" slider categories, Kim Pallister, general manager of Intel's gaming solutions team, said it's "complicated."

"If you had a profanity filter with sensitivity, and someone said 'fudge' and the word clipped off briefly, the max slider would bleep that," Pallister said, offering a hypothetical example.

Intel also clarified that the technology was not final, and could change between now and release. Still, the idea that people would be OK with some, but not a lot of, hate speech came off as absurd to people online. As a result, people are now making a ton of memes and jokes that belittle the menu settings. One tweet jokes, "computer, today i feel like being a little bit misogynistic."

The social media snafu is unfortunate, given that Bleep could actually be a helpful piece of technology in the future for those who are constantly on the receiving end of hateful comments. Intel acknowledges at the end of the presentation that, “while solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction.”

Speaking to Polygon, Kennedy suggested that a screenshot may not capture the experience of using the product.

"I think before seeing the reactions to the video, our plan all along was to learn from the users of the application — what's working, what's not working," Pallister said. "So some of the reaction that we saw isn't based on using the app, it's based on screenshots they saw in the background of the [keynote] that we did recently.

"Some of it's fair inquiry, some of it's like, 'hey, if you use the thing, you'd probably see it's a little bit different.' But we're gonna learn from all of those sources, and the goal is really to give users control and choice and see what works, and adapt accordingly."