GDC 2021: Intel and Spirit AI’s Bleep aims to filter live audio – and toxicity


At Intel’s GDC 2021 panel “Billions of Gamers, Thousands of Needs” this week, Intel and Spirit AI announced a new end-user application designed “to detect and redact audio” based on a list of preferences. Called Bleep, the application appears to be a kind of Windows overlay, though details were scant and there was no actual demo of the product. We did get to see a bit of the UI, which apparently offers options to moderate a broad range of toxic topics, ranging from general name-calling and aggression to LGBTQ+ hate and xenophobia.

Even assuming there isn’t much of an audio delay (a potential issue that came up several times in the audio-to-text sessions at GDC this summer alone), one major concern is how feasible this will be at launch, given that Intel’s own reference point is a GDC 2019 presentation on using the tech merely to detect harmful speech, not redact it.

If you’d like to watch the original GDC 2019 discussion of this kind of tech, the relevant section starts around the 18:40 mark; about a minute later, the speaker notes that technology alone probably isn’t enough to curtail toxicity and that incentives and enforcement may be a better method, since the tech at that point was meant only for detection. From the above image, you can see that the proto-Bleep does seem able to detect genuine aggression, at least in some cases: “I hate you Mario” passes, but what looks like an excited rant triggers the filter.

But “I want all martians to die” does nothing to trigger the filter, even though it could be read as threatening on the word “die” alone, and “martians” could just as easily be in-game lingo for a faction, which would make the statement a threat toward martian players. Context is everything in speech: detecting white nationalism (“milk” as the literal product vs. coded speech), flirting (which may set off filters even in consensual situations), friends smack-talking, and roleplaying the bad guy are all genuinely hard for AI to comprehend. And that doesn’t even cover multiple perspectives, as when two people are “jokingly” racist and it offends a third party. The product is, admittedly, only supposed to be a step toward ending toxicity.
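
To make the failure mode concrete, here’s a purely hypothetical Python sketch of the sort of context-free keyword filter the comments below worry about; it has nothing to do with Bleep’s actual (undisclosed) pipeline, and the blocklist is invented for illustration:

```python
# Hypothetical sketch: a naive keyword filter, to show why context is hard.
# Not Bleep's real pipeline; the blocklist is invented for illustration.
BLOCKLIST = {"die", "hate"}

def naive_filter(utterance: str) -> bool:
    """Flag an utterance if any word matches the blocklist, ignoring context."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return bool(words & BLOCKLIST)

# Keyword matching cuts both ways:
print(naive_filter("I want all martians to die"))        # True: flagged on "die" alone
print(naive_filter("we drink milk around here, friend")) # False: coded speech passes
```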

The demo UI we were shown this year has a long list of potential categories, and even scales for some topics rather than a binary choice between off and on. However, without any kind of demonstration, it’s difficult to gauge how well Bleep actually works in its current form. We reached out to Intel for more information but were told there were no demo videos yet and that some may appear as the product enters beta ahead of its planned launch this year.
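
Without a demo it’s anyone’s guess how this is wired up, but purely as a sketch of the settings model the UI implies (hypothetical category names and levels, not Bleep’s actual options), per-topic sensitivity might look something like this:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sketch of per-category settings: sliders for some topics
# rather than a single on/off switch. Names and levels are illustrative.
class Level(Enum):
    NONE = 0   # filter off
    SOME = 1   # redact only the most severe instances
    MOST = 2   # redact most detected instances
    ALL = 3    # redact everything detected

@dataclass
class FilterSettings:
    name_calling: Level = Level.NONE
    aggression: Level = Level.NONE
    lgbtq_hate: Level = Level.ALL
    xenophobia: Level = Level.ALL

settings = FilterSettings(aggression=Level.SOME)
print(settings)
```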



Reader
Rndomuser

This will not work. The AI is way too limited right now to determine when, for example, someone uses the word “trap” to insult transgender people versus when someone uses it to describe something like a clever strategy another player used to win the game. The only way to deal with toxicity in voice chat is to hire enough human moderators who can listen to several minutes of reported speech and make a determination based on its context, plus an option to mute a player and add them to an ignore list that prevents you from being placed into the same game round when queuing.
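
As a purely illustrative sketch of that last suggestion (invented names and structure, not any real game’s matchmaking API), an ignore list that also gates matchmaking might look like:

```python
from collections import defaultdict

# Hypothetical sketch: ignoring a player also keeps them out of your
# matchmaking pool. No real game's API is referenced here.
ignore_lists: dict[str, set[str]] = defaultdict(set)

def ignore(player: str, target: str) -> None:
    ignore_lists[player].add(target)

def can_match(lobby: list[str], candidate: str) -> bool:
    """Reject a candidate if anyone in the lobby ignores them, or vice versa."""
    return all(
        candidate not in ignore_lists[p] and p not in ignore_lists[candidate]
        for p in lobby
    )

ignore("alice", "toxic_tim")
print(can_match(["alice", "bob"], "toxic_tim"))  # False: alice ignores them
print(can_match(["bob", "carol"], "toxic_tim"))  # True
```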

Reader
EmberStar

I still say that a decent first step would be: Stop turning voice chat on by default. And when someone is reported for bad behavior, they should *always* start muted, with a marker to indicate that it’s a “badmouth” mute. Other players can choose to turn them on, but get fair warning what kind of player they likely are. Quit giving an open intercom to the biggest trolls in the room. Take away the megaphone.
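
A minimal sketch of that policy, with invented field names and messages rather than any shipping game’s implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of the default-mute idea above. Everything here
# is illustrative; no real game implements exactly this.
@dataclass
class VoiceState:
    muted: bool = True           # voice chat starts off by default
    badmouth_flag: bool = False  # set when a report against the player is upheld

def on_report_upheld(state: VoiceState) -> None:
    state.muted = True
    state.badmouth_flag = True   # future lobbies see the warning below

def try_unmute(state: VoiceState) -> str:
    if state.badmouth_flag:
        return "Muted for abusive chat. Unmute them anyway?"  # fair warning first
    state.muted = False
    return "Unmuted."
```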

Reader
Arktouros

As someone who’s been around highly competitive, highly toxic environments for pretty much all my gaming time in the last 20 years, I can tell you this isn’t going to work. Like, it’ll work, for sure. Block them words. Get those racist insults and homophobic slurs out of the game. Fantastic.

However, people and our use of language are just as adaptive as any AI, and more importantly, we’re smart enough to trick it. Any casual glance at articles about AI shows how easy it is to overwhelm and brute-force one. I can think of any number of ways to insult people by their various traits without specifically referencing any of them. Is the game or AI going to end up banning most of the English language by the time it’s done? And that’s just language; you don’t even need language to be toxic. Oh, the things I have done without saying a single word…

I just don’t think there’s a real solution to this one that doesn’t come from “nurturing” the right attitude in people.

Reader
Elizabeth Stone

This is the truth. People are going to adapt faster than the AI can keep up. Unless the AI is intelligent enough to learn and extract meaning from context alone the way a human mind does, it’ll always be a step behind.

Of course, even if they manage to get this to work, they’re still going to completely miss the real problems underlying these behaviors.

Reader
EmberStar

Even other humans have trouble dealing with slang. “Go frell the narfing frak.” That’s probably a swear of some kind… or is it? (Yes, it probably is.) But since there are only two real words in it, how would it even be moderated? Never mind “real” slang that’s in active use somewhere. Cool is hot, hot is sick, sick is rad, rad is good. (But only if you’re old enough to have watched Ninja Turtles as a kid. Which is everyone, because they reboot the Turtles every couple of years.)

I played a game that hired a third-party company to compile its list of “banned” words for text chat. As near as we could tell, that company decided that most words in Urban Dictionary are slang for some kind of profanity or obscenity, so they would just import the lot, ban all the main entries, and edit from there. Then they apparently forgot to do the second part. For a couple of days, the only thing you could type that wasn’t replaced with #($)*#! was the word “the.”

Reader
Dominique Gagnon

Why would you want any combination of racism/xenophobia that doesn’t include the “N-word”? Why isn’t it simply included in the “Racism and xenophobia” setting?

Reader
Vanquesse V

Considering that the big problem with machine learning is how easy it is to feed it biased info and end up with a heavily biased tool, having people split “the N-word” and xenophobia into separate categories does not inspire much hope in me.

Reader
EmberStar

Plus there are all the other cases where a word *can* be a racist label… or a perfectly reasonable word, depending on context. “I like to put a saltine cracker in my soup” isn’t racist, but I’ve seen that word reliably blocked in most text chats because it *can* be used in a racist way. I can just imagine what Bleep-moderated conversations will be like once it’s forced to compensate for every possible edge case.

“Hey Dom! That was a BLEEP BLEEP of BLEEP. Would you like to BLEEP with BLEEP later? We were thinking BLEEP pizza and BLEEP BLEEP, and maybe go watch Raya and the Last BLEEP afterwards. You up for it?”

Reader
MilitiaMasterV

Not a fan of censorship in most forms, but there are times I slip and swear (like a sailor), and I could do without most xenophobia/racism. I have noticed that most programs like this are nowadays used to protect the perpetrators instead of the victims… I just got ‘hate speeched’ on FB for calling out my own race over something going on IRL, after a SENATOR made comments supporting lynching. It’s awfully funny how people can get away with saying that in the ‘hallowed halls’ of a place like that, but a piece of technology won’t let me reply in kind…