Jmdajr

Member
Oct 25, 2017
14,542
I haven't used live chat since the original Xbox with Halo 2. Holy shit, it was bad.

Party chat was a savior for the Xbox 360.
 

senj

Member
Nov 6, 2017
4,567
looking forward to this failing in all kinds of non-obvious ways that people will hate
 

ShadowAUS

Member
Feb 20, 2019
2,140
Australia
"I was just talking about the unrest in a specific African country" - gamers, probably
Just for comedy's sake, if someone uses this defence they must, on the spot, demonstrate the correct pronunciation of Niger in French and/or any of the local languages. Obviously still ban them, but I want to hear the attempts anyway.

As for the system, I think it's actually not a bad use for AI, in that this is a job that can't feasibly be done purely by human power, and shouldn't be, as that would be a mentally shit job.
A little dystopian? Maybe, but if anything encourages people to clean up their act, it's the panopticon listening in on their hate speech.
 

SwampBastard

The Fallen
Nov 1, 2017
11,182
tay.ai-artificial-chatbot.jpg


"Listen here you faintcolt ****, you are now banned so stick your ******** **** in a blender, now **** ***. Have a lovely day, heil *****"

View: https://www.youtube.com/watch?v=gmhEXjM7iTw
 

Meg Cherry

Member
Oct 25, 2017
7,345
Seattle, WA
Glad we're now force-feeding robots the racist diatribes of CoD players. This will absolutely give them a pleasant, optimistic view of humanity when the time comes for them to overthrow us. /s

Seriously though, this seems like an interesting system. Curious to see what the results are.
 

Soap

Member
Oct 27, 2017
15,509
Hopefully this stops false flagging. I once got a warning for text chat yet I have never used text chat.
 

artsi

Member
Oct 26, 2017
2,708
Finland
It seems like a good idea in theory against online harassment, but I'm against automated mass surveillance and I'm not going to cheer for it.

It's listening only to your online voice now, but it can do so much more.
 

SimplyComplex

Member
May 23, 2018
4,104
This will likely be a net positive for the game in general. Haven't used game chat in about a decade since party chat is just so much more effective.
 

Jakartalado

Member
Oct 27, 2017
2,313
São Paulo, Brazil
I'm pretty sure this AI only understands English, and that's going to be a problem.
People speaking languages other than English might get falsely banned by this AI.
Something like this can happen:
https://gamerant.com/apex-legends-japanese-players-ban-run-word/

Imagine two Norwegian dudes playing CoD and talking to each other. They say something the AI thinks is racist, even though it isn't. That could easily lead to false bans.
There's also a chance that people with speech impediments might get falsely banned.

However, it seems the current system is just flagging at the moment, though knowing Activision, they might eventually get rid of human moderators and automate their jobs with AI.

I don't know why people in this thread are saying "great" when the risk of players getting falsely banned is high.

Not really sure if we can discuss AI topics around Era, but Vertex AI from Google can distinguish a lot of languages with high accuracy. I've already tested it at work with Portuguese and it works really well.
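Just to illustrate the language-routing idea: before any toxicity check, you'd want to identify the clip's language and route it to a model or moderator for that language. This is a toy sketch, nothing like what Vertex AI or ToxMod actually run; the stopword lists, names, and thresholds are all made up.

```python
# Toy language triage: real systems use trained classifiers, but the
# flow is the same -- identify the language first, then route the clip
# to language-appropriate moderation instead of guessing in English.
STOPWORDS = {
    "en": {"the", "and", "you", "is", "not"},
    "pt": {"de", "que", "e", "não", "uma"},
    "no": {"og", "ikke", "det", "jeg", "er"},
}

def guess_language(text: str) -> str:
    """Pick the language whose common words overlap the text most."""
    words = set(text.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

A clip tagged "unknown" (or any unsupported language) is exactly the case where auto-moderation should back off to a human.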
 

Annihilo

Member
Sep 14, 2019
502
It shouldn't be legal to listen to/record in-game audio, imo, though I admit that if it were, using AI to do it would be a clever loophole.

To me this is a real 'the road to hell is paved with good intentions' moment.
 

collige

Member
Oct 31, 2017
12,772
Good idea, assuming that there's a human actually listening to the clips and verifying the bans before they go through.

I'd also like to see some comparative quality benchmarks between languages, if they exist. There's no way the other 12 languages have as much training data as English and Spanish.

it says actions won't be taken instantaneously and are subject to review
It says they "may" be subject to review. The technology only flags voice clips as probably toxic, coming up with a moderation policy based on that data is up to Activision and they could easily just set it up to auto-ban someone who gets flagged X times.
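To make that concern concrete, here's a hypothetical sketch of the policy layer that sits on top of the flagging tech. The threshold, names, and behavior are all invented; nothing here comes from Activision or Modulate. The point is that the same flag stream supports either human review or auto-bans, and that choice is pure operator policy.

```python
from collections import Counter

FLAG_THRESHOLD = 3  # assumed value, purely illustrative

class ModerationPolicy:
    """Hypothetical policy over a stream of "this clip was flagged" events."""

    def __init__(self, threshold: int = FLAG_THRESHOLD, human_review: bool = True):
        self.threshold = threshold
        self.human_review = human_review
        self.flags = Counter()  # player_id -> flag count

    def on_flag(self, player_id: str) -> str:
        """Record one flag and return the action the policy takes."""
        self.flags[player_id] += 1
        if self.flags[player_id] < self.threshold:
            return "logged"
        # Same data, two very different outcomes, chosen by configuration:
        return "queued_for_review" if self.human_review else "auto_banned"
```

Flip `human_review` to `False` and the identical flagging model becomes an auto-ban machine, which is why the "may be subject to review" wording matters.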
 

Deleted member 3038

Oct 25, 2017
3,569
Yeah, I'm good. Absolutely no way I can trust where they store that data. I'll stick to talking outside of the game.
 

BeI

Member
Dec 9, 2017
6,056
Imagine if it wasn't AI...


I wouldn't be surprised if some people would argue that this is stealing jobs that could go to people instead. Good to know a human will have to review the content that is automatically flagged. Sounds like it would be a nightmare having to just listen to all audio to see if something bad pops up with no automation.

That aside, I'm gonna have to make sure the people in my COD group know about this because they have some proper potty mouths. We mostly stick to Discord anyway, but sometimes they like to shout some British-favourite words at people.
 

Azai

Member
Jun 10, 2020
4,031
This will turn out great...

Tons of false flags and people getting banned for shit-talking with their friends in private and public lobbies.
 

Ambient80

The Fallen
Oct 25, 2017
4,698
Honestly? It's an interesting idea, especially since it defers to a human for confirmation. It's not really possible to have humans moderating in real time constantly, so trying this out to see how it does is pretty neat. If it does a great job, then great! If it false-flags, humans will see that and can try making adjustments. If it's just totally awful, then they'll likely get rid of it after a year or two.

It feels like CoD has done more novel/interesting things to combat toxicity and/or cheating the last couple years than most online games.
 

collige

Member
Oct 31, 2017
12,772
It shouldn't be legal to listen to/record in-game audio, imo, though I admit that if it were, using AI to do it would be a clever loophole.

To me this is a real 'the road to hell is paved with good intentions' moment.
I think this is very fair when it comes to stuff like party chats, but I wouldn't expect any sort of privacy in the team chat of a public matchmaking lobby or a server hosted by the developers.

How many players play and use this feature? That sounds like it could be a lot of processing; who's paying for that?
I can understand banning via text and report features etc., but active voice moderation as a whole can't be cheap, and it's Activision, right?
A big selling point of the service is that they filter the audio data before processing to only find the relevant parts and cut down on costs.

Though they don't really say exactly how the triaging or voice analysis works, which makes me kinda sus. This is the real [citation needed] in all this, especially because different games have different codes of conduct and they're obviously not training a new model for each game type.

ToxMod's machine learning technology can understand emotion and nuance cues to help differentiate between friendly banter and genuine bad behavior.

I don't believe for a second this can be reliably automated in a real video game environment. It's like saying they've found a way to detect hard R's.

The reason why I'm not worried about the overall system is because it's going to be tuned to cut down on the amount of audio processing being done for cost reasons, which in turn will lower the false positive rate. It could still go to shit if they auto-ban people based on these results.
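The triage idea above is basically a cheap first-pass filter sitting in front of the expensive model: most clips get dropped early, so the costly analysis (and its false positives) only ever touches a small fraction of audio. A rough sketch of that shape, with the trigger list, names, and logic entirely invented; real triage presumably uses acoustic and transcript signals, not a word list.

```python
# Hypothetical cost-driven triage: a fast lexical screen decides which
# transcribed clips are even worth sending to the expensive toxicity model.
CHEAP_TRIGGER_WORDS = {"slur_a", "slur_b"}  # placeholder terms

def cheap_screen(transcript: str) -> bool:
    """Fast first-pass check; most clips fail it and are discarded."""
    words = transcript.lower().split()
    return any(w in words for w in CHEAP_TRIGGER_WORDS)

def triage(clips: list[str]) -> list[str]:
    """Return only the clips that the cheap screen flags for deep analysis."""
    return [c for c in clips if cheap_screen(c)]
```

Anything the screen drops is never analyzed at all, which is why an aggressive triage stage lowers both cost and the false-positive rate, at the price of missing whatever the cheap filter can't see.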