The center of social life for many young people is no longer Instagram or TikTok. A quiet but significant migration has moved authentic social interaction to gaming platforms like Discord, Roblox, and Steam.
While public and political attention remains fixed on mainstream apps, these gaming-centric spaces have become the primary hubs for unfiltered conversation. The shift has also created a dangerous, largely unmonitored environment where extremism and exploitation can take root undetected.
A different kind of social space
Gaming platforms are fundamentally different from public-facing social media. Apps like TikTok are built for mass broadcast and viral content, but platforms like Discord are structured around smaller, closed communities. They function more like digital clubhouses than public stages. This design encourages a different kind of interaction, one that prioritizes group discussion over creating a personal brand.
Anonymity is central to this environment. Users operate under pseudonymous identities, which removes many of the real-world consequences of expressing radical or taboo ideas.
This has made these platforms a powerful draw for young people seeking connection without public scrutiny.
An unmonitored breeding ground for harm
The privacy that attracts users also provides a sanctuary for predators and extremist groups that have been banned from mainstream sites.
Mariana Olaizola Rosenblat, a policy advisor at NYU Stern, says these groups specifically target gaming platforms to find and influence young people:
“Extremists and predators go to these gaming spaces to find highly-engaged, susceptible young people, many of whom are yearning for connection.”
This harmful activity is extremely difficult to track. The conversations happen in thousands of small, private chat rooms that are inaccessible to researchers. Users often mask dangerous ideologies with “gamespeak” or in-jokes, making moderation challenging. “Most researchers are basically blind to all of this,” Rosenblat says. “You can’t enter these rooms.”
A documented pattern of real-world consequences
While most user activity on these platforms is harmless, a pattern of radicalization and exploitation has led to documented, real-world tragedies. These risks are no longer theoretical.
- Discord was used to organize the 2017 “Unite the Right” rally in Charlottesville and was where the 2022 Buffalo shooter documented his plans for months in a private chat. The suspect in the recent murder of activist Charlie Kirk also appeared to confess in a Discord chat.
- Roblox is facing multiple lawsuits for failing to protect children from predators. One lawsuit from an Iowa family alleges their 13-year-old daughter was kidnapped and trafficked by a man she met on the platform. Roblox told Axios it “invest[s] significant resources in advanced safety technology.”
- Twitch and Steam have also been identified as platforms where extremism has found a foothold. The Buffalo shooting was livestreamed on Twitch, and researchers have documented how Steam has become a hub for far-right groups to connect.
How moderation struggles to scale
The companies behind these platforms face a structural problem. Their systems are built for rapid, peer-to-peer interaction, not for top-down oversight. Discord hosts millions of private servers, many created and abandoned in days or weeks.
Roblox generates vast amounts of user-made content each hour. Traditional moderation tools—keyword filters, reactive reporting, and small safety teams—cannot keep up with the scale or the speed at which harmful communities evolve.
Several firms have introduced AI-driven detection tools and community guidelines, but these efforts are fragmented and opaque. Safety researchers note that companies rarely disclose how many moderators they employ, how algorithms are tuned, or what enforcement outcomes look like. This lack of transparency makes it difficult for lawmakers, parents, or academics to assess whether interventions work.
Closing the research gap
Unlike Facebook or X (formerly Twitter), where public posts can be scraped and analyzed, the private nature of gaming platforms blocks outside audits. Independent researchers often cannot study real interactions, leaving safety debates dependent on leaks, lawsuits, or whistleblowers.
Without more open access, policymakers risk acting on anecdote rather than evidence.
Some experts propose creating privacy-preserving data portals that allow vetted researchers to study harmful trends without exposing user identities. Others argue for mandatory safety reporting—similar to food-safety or workplace-safety standards—that would require companies to publish metrics on abuse reports, moderator staffing, and child-protection outcomes.
Building safer social commons
Young people are unlikely to abandon these spaces. They offer belonging, creative expression, and real friendships that mainstream networks increasingly fail to provide. The challenge is not to shut them down but to treat them as the new social commons they have become—spaces that require rules, stewardship, and accountability.
Practical steps could include age-appropriate design standards, stronger parental controls, clearer pathways for law enforcement requests, and independent audits of safety tools. Gaming platforms might also collaborate on shared threat databases or rapid-response protocols for extremist content, much as financial institutions share data to combat fraud.
A turning point for online safety
The October 8 hearings will test whether lawmakers understand the scope of the problem and can move beyond symbolic questioning of tech executives. Without stronger standards, the same factors that make gaming platforms appealing—community, anonymity, and creative freedom—will continue to attract those who seek to do harm.
Recognizing these spaces as the real social hubs of today’s youth is the first step toward governing them with the seriousness they demand.