A new report shows how extremists use profiles, hashtags and other platform features to spread content that violates TikTok’s community guidelines.
Originally published by The 19th
This article has been updated.
Violent extremists, neo-Nazis and other white supremacist groups are able to easily spread racist, misogynistic and anti-LGBTQ+ content on TikTok that runs afoul of the social media platform’s own terms of service, according to new research by the Institute for Strategic Dialogue (ISD).
The ISD report examines how extremists use profiles, hashtags, music and other effects on TikTok. Researchers identified a sample of 1,030 videos from 491 accounts, or about eight hours of content, that seemingly violated TikTok’s community guidelines. At least 312 of those videos promoted white supremacy and 246 expressed support for organizations or individuals known to be extremists or terrorists. At least 58 videos included misogynist content and 90 had anti-LGBTQ+ sentiments, ISD found.
The TikTok content shows how white supremacist movements are often layered with elements of misogyny and anti-LGBTQ+ attitudes. Multiple videos in the sample were linked to the “Men Going Their Own Way” movement, which the Southern Poverty Law Center has categorized as a male supremacist group. Other videos used a white supremacist term to criticize women in mixed-race relationships. Some praised the mass shooter Elliot Rodger, who in 2014 killed six people near a California college campus and circulated a video and written manifesto saying that the attack was related to his hatred of women.
Videos with anti-LGBTQ+ content celebrated the persecution of gay people by authoritarian regimes and the suicides of transgender people.
The 19th was given a preview of the study’s misogyny-related components ahead of the report’s release on Tuesday by ISD, a nonprofit organization of researchers and policy experts that tracks extremism online and makes recommendations to governments and businesses in the United States and overseas.
ISD researcher Ciaran O’Connor called the report a first-of-its-kind examination of how TikTok is used to spread white supremacy, neo-Nazism and other forms of hate speech. The study concludes there is an “enforcement gap” at TikTok, and O’Connor said he hopes the findings will “start a conversation about the access or lack of access that researchers have when it comes to evaluating and examining hate at scale” on the platform.
O’Connor said that it is difficult for researchers to do large-scale searches of TikTok content, so he used a “snowball methodology” — manually searching 157 keywords that led to accounts sharing far-right views, then examining the accounts to which those were linked — to arrive at the sample of 1,030 videos that seemingly violated TikTok’s community guidelines.
TikTok’s guidelines state that the platform will remove content from terrorist or criminal organizations and individuals who “attack people based on protected characteristics” such as race, gender, gender identity or sexual orientation, and that: “We consider attacks to include actions that incite violence or hatred, dehumanize individuals or groups, or embrace a hateful ideology.”
News investigations have nevertheless revealed that TikTok is used by Islamic State militants and to promote neo-Nazism. While the platform has started releasing transparency reports with details about the content it has removed for violating its guidelines, it is not yet part of a consortium of tech giants such as Facebook, Twitter and YouTube involved in an industry anti-terrorism effort to collaboratively track and review content from white supremacists and far-right militia groups.
A TikTok spokesperson said the platform “categorically prohibits violent extremism and hateful behavior, and our dedicated team will remove any such content as it violates our policies and undermines the creative and joyful experience people expect on our platform. We greatly value our collaboration with ISD and others whose critical research on industry-wide challenges helps strengthen how we enforce our policies.”
ISD is recommending that TikTok improve its understanding of how creators spread extremist content and develop more nuanced policies that go beyond straight hashtag bans. The report also notes that TikTok’s interface is “severely limited in the data it provides to researchers or the public,” and suggests improvements to search functionality and greater transparency on how its algorithm works.