The news: Video platform TikTok has published a set of new, more detailed guidelines governing which videos will be deleted from the app. It says it will take down videos promoting terrorism, crime, violence, hate speech, or self-harm, for example.
The rules also ban “misleading information” that could harm either an individual or the general public, going further than US competitors such as Facebook, which have (controversially) tried to avoid making those sorts of judgments. TikTok also explicitly bans denying the reality of “well-documented and violent events” like the Holocaust, while Facebook permits it.
What’s next: Writing the policies is the easy part; enforcing them is much harder, and TikTok hasn’t provided much detail on how it does that. However, the German publication Netzpolitik published leaked moderation guidelines at the end of last year, which showed how TikTok algorithmically prevents certain videos from becoming popular by making them harder for users to find. Controversially, these included videos created by people with disabilities. TikTok also came in for criticism for banning a girl who had posted a video criticizing the Chinese state’s treatment of Uighur people.
Plenty of wiggle room: In a blog post published yesterday, TikTok said the global guidelines “are the moderation policies TikTok’s regional and country teams localize and implement in accordance with local laws and norms,” which means it can choose what to suppress and what to promote country by country.