How to Protect Children Who Use Online Gaming Platforms

May 10, 2024

Studying extremism in video games led me to many alarming findings. By far the most disturbing was the young age of those who witness and participate in extremist rhetoric and behavior in online gaming spaces.

In a multinational survey our Center commissioned in 2023, 17% of gamers under 18 reported they had come across extremist statements or narratives — statements like “The white race is superior to other races” and “A particular ethnicity should be eliminated.” In addition, 15% said they had experienced violent threats or some other form of acute harassment.

Contrary to what one might expect, the story of radicalization in video games is often not about sly adults preying on children. Rather, radicalization frequently involves teenagers enticing younger children, some as young as six years old, into progressively incendiary rhetoric and dangerous activity, through a process known as the “radicalization funnel.” This reality is illustrated by the case of Daniel Harris, a teenager in the UK who was radicalized online and whose extremist materials, shared on gaming-adjacent platforms like Discord, inspired other teenagers, including the perpetrator of the 2022 shooting in Buffalo, New York. The Harris case was recently the subject of a BBC radio documentary, “The Boys Are Not Alright.”

The pattern of children and teenagers being implicated in online extremism has prompted several concerned stakeholders, including the producer of the BBC documentary, to ask me: How can parents and caretakers shield children from extremist recruitment, exploitation, and other online harms without constantly looking over their children's shoulders?

In response to this question, I often fall back on “parental controls”: useful but often rudimentary and easily circumvented settings that some (but not all) platforms offer. Recently, however, a more promising technology came to my attention. Devised by a startup called Kidas, it consists of a scanning algorithm, installed alongside standard virus-scanning software, that monitors conversations on more than 200 video game platforms as well as on Discord.

When the scanning algorithm detects a potential harm in communications on these platforms, it alerts parents in real time and shares expert-informed recommendations for handling the threat. Parents must take the initiative to buy and install the product, which is good for ensuring consent to otherwise privacy-invasive software but bad for accessibility: most parents are unaware the product exists, and some may not be able to afford it.
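For readers curious about the mechanics, here is a minimal sketch in Python of how such a monitor-and-alert loop could work. Everything in it (the phrase list, the risk labels, the recommendation text, the function names) is an illustrative assumption on my part; Kidas has not published its detection logic, and a production system would rely on trained classifiers rather than simple keyword matching.

```python
# Illustrative sketch of a monitor-and-alert loop for in-game chat.
# All labels, phrases, and recommendations below are hypothetical.

# Toy indicator phrases mapped to hypothetical risk labels.
INDICATORS = {
    "race is superior": "extremist_rhetoric",
    "should be eliminated": "extremist_rhetoric",
    "i will hurt you": "violent_threat",
}

# Hypothetical expert-informed guidance attached to each risk label.
RECOMMENDATIONS = {
    "extremist_rhetoric": "Talk with your child about where the message came from.",
    "violent_threat": "Preserve the chat log and report the account to the platform.",
}


def classify_message(text: str):
    """Stand-in for a trained classifier: return a risk label if the
    message contains a known indicator phrase, else None."""
    lowered = text.lower()
    for phrase, label in INDICATORS.items():
        if phrase in lowered:
            return label
    return None


def alert_parent(platform: str, label: str) -> None:
    """Notify the parent with guidance. A real product would push a
    notification; here we simply print the alert."""
    print(f"[ALERT] {label} detected on {platform}: {RECOMMENDATIONS[label]}")


def monitor(message_stream) -> None:
    """Scan each (platform, message) pair as it arrives and alert in
    real time when a message is flagged."""
    for platform, text in message_stream:
        label = classify_message(text)
        if label:
            alert_parent(platform, label)


if __name__ == "__main__":
    # Simulated stream of chat messages from two platforms.
    sample = [
        ("Discord", "gg, nice match"),
        ("SomeGame", "the white race is superior to other races"),
    ]
    monitor(sample)
```

The property the sketch tries to preserve is the real-time loop the product description implies: each message is classified as it arrives, and a flagged message immediately triggers a parent-facing alert bundled with guidance, rather than being logged for later review.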

Technological innovations such as Kidas’ software cannot solve the problem of online radicalization. But they can offer interim options for concerned parents who want to protect their children online.
