Twitch is finally accepting responsibility as a kingmaking micro-celebrity machine, not just a service or a platform. Today, the Amazon-owned company announced an official, public policy for investigating streamers' serious indiscretions in real life, or on services like Discord or Twitter.
Last June, dozens of women came forward with allegations of sexual misconduct against prominent video game streamers on Twitch. On Twitter and other social media, they shared heartbreaking accounts of streamers leveraging their relative renown to push boundaries, causing serious personal and professional harm. Twitch would eventually ban or suspend several accused streamers, some of whom were partnered (able to receive money through Twitch subscriptions). At the same time, Twitch's #MeToo movement prompted broader questions about the service's accountability for the actions of its most visible users, both on and off stream.
While investigating these problematic users, Twitch COO Sara Clemens tells WIRED, Twitch's moderation and law enforcement teams learned how difficult it is to review and act on user behavior that occurs in real life or on other platforms like Discord. “We realized that not having a policy to look at off-service behavior created a threat vector to our community that we hadn’t addressed,” says Clemens. Today, Twitch is announcing its solution: an off-service policy. In partnership with a third-party law firm, Twitch will investigate reports of offenses such as sexual assault, extremist behavior, and threats of violence that occur entirely off its platform.
“We’ve been working on this for a while,” says Clemens. “It is certainly uncharted space.”
Twitch is at the forefront of helping ensure that not only content, but the people who create it, are safe for the community. (The policy applies to everyone: partners, affiliates, and even relatively unknown streamers.) For years, sites that support digital fame have banned users for off-platform indiscretions. In 2017, PayPal cut off a group of white supremacists. In 2018, Patreon removed anti-feminist YouTuber Carl Benjamin, known as Sargon of Akkad, for racist speech on YouTube. Meanwhile, sites that directly cultivate or rely on digital celebrities have tended not to rigorously scrutinize their most famous or influential users, especially when those users relegate their problematic behavior to Discord servers or other corners of the internet.
Although they never published an official policy, kingmaking services like Twitch and YouTube have, in the past, deplatformed users they believe harm their communities for things they've said or done elsewhere. In late 2020, YouTube announced that it had temporarily demonetized the NELK prank channel after its creators threw ragers at Illinois State University when the social-gathering limit was 10 people. These actions, and public statements about them, are the exception rather than the rule.
“Platforms sometimes have special mechanisms to bring this up,” says Kat Lo, moderation lead at Meedan, a nonprofit tech-literacy company, referring to the hotlines that high-profile users often have with company employees. She says off-service moderation has been happening on the biggest platforms for at least five years. But in general, she says, companies don't often advertise or formalize these processes. “Investigating off-platform behavior requires a great deal of investigative capacity, finding evidence that can be verifiable. It is difficult to standardize.”
In the second half of 2020, Twitch received 7.4 million user reports for “all types of violations” and acted on those reports 1.1 million times, according to its recent transparency report. During that period, Twitch acted on 61,200 suspected cases of hateful conduct, sexual harassment, and harassment. That's a big lift. (Twitch also acted on 67 terrorism cases and forwarded 16 cases to law enforcement.) Although they make up a large portion of user reports, harassment and bullying are not among the off-platform behaviors Twitch will begin investigating unless they also occur on Twitch. Off-service behaviors that will trigger investigations include what Twitch's blog calls “serious offenses that pose a substantial safety risk to the community”: deadly violence and violent extremism, explicit and credible threats of mass violence, membership in a hate group, and so on. While bullying and harassment are not included now, Twitch says its new policy is designed to evolve.