Ubisoft & Riot Are Working Together to Make Tools for Preventing Player Toxicity
Ubisoft and Riot Games are partnering on a research initiative exploring AI technology to moderate toxicity in gaming communities.
Ubisoft and Riot Games are partnering on research into AI-driven moderation tools to combat toxicity in video game communities. Toxicity in online games is an issue that gaming studios continue to struggle to address. It's not for lack of trying: major online games support both automated and customer service-based solutions. But automation can't catch every issue and humans can't oversee everything, which makes innovation necessary. Ubisoft and Riot are partnering to pursue exactly that.
Riot's interest in such a partnership is self-evident. League of Legends is one of the most popular online multiplayer games in the world, and a 2019 report found that 75% of League of Legends players had experienced harassment in the game. Tom Clancy's Rainbow Six Siege, Trackmania, and The Division are just a few of Ubisoft's successful multiplayer releases. Countering toxicity is in both companies' interests.
Enter Zero Harm in Comms, which Riot Games and Ubisoft describe as the first step in a cross-industry project to benefit everyone who plays video games. Zero Harm in Comms is a research project centered on assembling a database of in-game data. This database will "train AI-based preemptive moderation tools." The goal is to improve the automatic detection of harmful behavior and ultimately foster more positive communities. As Ubisoft says, "With more data, these systems can theoretically gain an understanding of nuance and context beyond words."
Anyone reading that Ubisoft and Riot are putting together a database of player information would understandably have immediate privacy concerns. Riot explains that any data that could be used to identify an individual will be "removed before sharing." The data, said to be primarily chat logs from Riot and Ubisoft games including League of Legends, will be "scrubbed clean" of any personal information. Notably, there is no word on whether this process will have independent oversight, which is certain to invite skepticism regardless of what's said now.
Work on the Zero Harm in Comms project has been ongoing for around six months, with Yves Jacquier, director of Ubisoft's La Forge R&D department, and Wesley Kerr, Riot's head of tech research, already collaborating. The pair plan to share the project's findings with the industry at large in 2023.
Few online video game fans would disagree that moderation efforts need to improve. However, many players would say that automated systems are less of a priority than human customer service. Ubisoft and Riot have a lot to prove to earn player trust on this subject, particularly with player privacy also at stake. Two of the largest online gaming companies in the world are building a database of player chats to improve automated moderation, for better or worse.