It’s an unfortunate truth of the internet that no matter where you go or how hard you try, you simply can’t avoid people who just want to be nasty to you. Abusive and toxic behavior has been around for as long as the internet itself, but video games often bring out the worst of it. Whether because of the sense of competition or the inherent tribalism within gamer culture, certain games have a reputation for extremely unfriendly player bases.

Many attempts have been made over the years to crack down on this, with varying degrees of success, but now publishers Ubisoft and Riot Games are joining forces to develop new technology that they hope will go some way toward solving the problem.

Their new research project, titled “Zero Harm in Comms,” is a technological partnership with the goal of creating sophisticated AI systems that can detect and crack down on toxic behavior in in-game chats. The hope is to build a cross-industry shared database that these AIs can learn from, expanding their moderation potential.
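Neither company has published what such a shared database would actually contain. Purely as an illustration, a contributed training record might pair an anonymized chat line with a severity label and some context; every field name in this sketch is hypothetical:

```python
from dataclasses import dataclass

# Illustrative only -- neither Ubisoft nor Riot has disclosed the
# structure of the "Zero Harm in Comms" dataset. All fields here
# are assumptions about what a cross-studio record could hold.
@dataclass
class SharedChatRecord:
    message_text: str   # chat line, stripped of usernames and other PII
    source_game: str    # e.g. "League of Legends", "Rainbow Six Siege"
    language: str       # ISO 639-1 code, e.g. "en"
    label: str          # e.g. "none", "minor_disruption", "severe_abuse"

record = SharedChatRecord(
    message_text="<anonymized chat line>",
    source_game="League of Legends",
    language="en",
    label="minor_disruption",
)
```

Pooling labeled examples across games in some such common format is what would let each studio’s moderation models learn from abuse patterns it has never seen in its own titles.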

Riot Games is already putting a system like this into practice. The recent League of Legends 2023 Preseason patch added a new AI system to moderate in-game chats, and the company believes this AI will be able to differentiate between minorly disruptive behavior and more severe cases of abuse or toxicity.
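Riot hasn’t published how its system draws that line. As a minimal sketch, assuming an upstream classifier that returns a toxicity probability for each message, the tiered logic might look something like the following; the thresholds and action names are invented for illustration:

```python
from enum import Enum

class Action(Enum):
    NONE = "no action"
    WARNING = "warning"
    HUMAN_REVIEW = "escalate to human review"

# Hypothetical cutoffs -- Riot has not disclosed its real thresholds.
MINOR_THRESHOLD = 0.5   # minorly disruptive: flaming, spam
SEVERE_THRESHOLD = 0.9  # severe abuse: hate speech, threats

def moderate(toxicity_score: float) -> Action:
    """Map a toxicity probability (from some trained model) to a
    moderation action, separating minor disruption from severe abuse."""
    if toxicity_score >= SEVERE_THRESHOLD:
        return Action.HUMAN_REVIEW
    if toxicity_score >= MINOR_THRESHOLD:
        return Action.WARNING
    return Action.NONE

print(moderate(0.95))  # Action.HUMAN_REVIEW
print(moderate(0.60))  # Action.WARNING
```

The point of the tiering is proportionality: a heated but minor outburst might earn an automated warning, while messages the model scores as severe abuse would be escalated rather than punished blindly.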

Speaking about the initiative, Yves Jacquier, Executive Director at Ubisoft La Forge, said, “Disruptive player behaviors is an issue that we take very seriously but also one that is very difficult to solve…Through this technological partnership with Riot Games, we are exploring how to better prevent in-game toxicity as designers of these environments with a direct link to our communities.”

If this technology proves successful, we could see others get on board, including companies outside the video game industry. With Riot’s first attempt already out in the wild, we should soon get some idea of how realistic this partnership’s goals are.