TikTok removed nearly 7.3 million accounts suspected to belong to under-age children in the first quarter of this year.
The video-sharing platform said the profiles accounted for fewer than 1% of global users.
Children aged 13 and over are allowed to use the platform, which is highly popular with teenagers.
This is the first time TikTok has published such figures in a Community Guidelines Enforcement Report.
It said it hoped the detail about under-age users would “help the industry push forward when it comes to transparency and accountability around user safety”.
The report also said:
- 61,951,327 videos were removed for violating the app’s rules, fewer than 1% of all videos uploaded
- 82% of them were removed before being viewed, 91% before any user reported them, and 93% within 24 hours of being posted
- 1,921,900 ads were rejected for violating advertising policies and guidelines
- 11,149,514 accounts in total were removed for violating guidelines and terms of service.
TikTok emphasised that it has introduced several measures to protect teenagers on the platform, including limiting features like private messaging and live-streaming to users aged 16 and over.
Those under the age of 16 will also have their accounts automatically set to private – a feature introduced in January this year.
“To bring more visibility to the actions we take to protect minors, in this report we added the number of accounts removed for potentially belonging to an under-age person,” Cormac Keenan, head of trust and safety at TikTok, said.
“In order to continue strengthening our approach to keeping TikTok a place for people 13 and over, we aim to explore new technologies to help with the industry-wide challenge of age assurance.”
The Chinese-owned app is popular with teenagers, but concerns about under-age users are on the rise.
Data that was leaked to the New York Times last year suggested about a third of US users were aged 14 and under.
Under-age users
“One of the challenges TikTok faces that isn’t different to [other forms of] social media is verifying the age of users,” said Chris Stokel-Walker, author of TikTok Boom.
“For decades, it’s been possible to bypass age verification checks simply by saying you’re older than you are, and inserting a fake date of birth.
“TikTok has some of the tech world’s most sophisticated computer vision technology, and with it, probably has the ability to spot with decent accuracy under-age users. But using such technology would require a lot of permissions that people may feel queasy about.”
It comes as TikTok is being sued over how it collects and uses children’s data.
The claim is being made on behalf of millions of children in the UK and EU who use the platform. The tech firm said the case was without merit and it would fight it.
In January, the Italian data privacy watchdog ordered TikTok to block under-age accounts, following the death of a 10-year-old girl who tried a viral challenge on the app.
TikTok’s rise and rise has been extraordinary to watch.
Not only is it one of the fastest growing apps ever, it’s probably the most scrutinised too.
Like all social networks, the company is forever walking a fine line between attracting teenagers – the future of all platforms – and making sure they’re not too young.
The firm wouldn’t say how exactly it “knows” when a user is under 13, but the takedown number is impressive and I understand it’s a mixture of automatic machine processes and – interestingly – time-consuming and costly human moderation too.
Many experts say there’s no “silver bullet” to age moderation on social networks.
Full age verification is often discussed as the answer, but it is an undesirable barrier for tech firms and problematic for users.
Not many parents or children would want to hand over passport details to tech giants, for example, if enforced age verification was implemented.