TikTok takes extra steps to curb dangerous challenges

TikTok is trying to strengthen the detection and enforcement of rules against dangerous online challenges and hoaxes.

Just over one in five teenagers has participated in an online challenge, a survey commissioned by TikTok suggests.

But only one in 50 has taken part in a challenge they considered “risky and dangerous”, and fewer than one in 300 in one they considered “really dangerous”.

The survey looked at teenagers’ broad online experience, without focusing on any one platform.

‘Skull-breaker’ challenge

There has been widespread concern about the proliferation, across various platforms, of potentially harmful online challenges.

Last year, the “skull-breaker” challenge, shared on TikTok, was linked to injuries.

And this year, doctors warned of the risk to life and limb of the “milk-crate challenge”, which invited the foolhardy to climb pyramids of milk crates.

But online challenges can also be positive and promote worthwhile causes, experts note, such as the “ice-bucket challenge”, which helped raise awareness of amyotrophic lateral sclerosis (ALS).

Violating content

The independent report TikTok commissioned, Exploring effective prevention education responses to dangerous online challenges, draws on a survey of teachers, parents and 5,400 13- to 19-year-olds in the UK, the US, Germany, Australia, Italy, Brazil, Mexico, Indonesia, Vietnam and Argentina.

In response to its findings, TikTok said technology that “alerts our safety teams to sudden increases in violating content linked to hashtags” would be expanded “to also capture potentially dangerous behaviour”.

For example, if a hashtag such as #foodchallenge, normally used to share recipes, suddenly saw a spike in interest apparently connected to videos breaking the company’s rules, the team would investigate.

TikTok already has a policy of removing content that “promotes or glorifies dangerous acts”.

Self-harm hoaxes

Experts contributing to the report noted: “Adolescence is a period that has always been associated with heightened risk-taking.”

But the report comes at a time of intense public debate about the impact of social media on teenagers, after whistleblower Frances Haugen revealed Facebook research into the effect Instagram had on their mental health.

The TikTok research also looked at suicide and self-harm hoaxes.

Some schools, for example, warned parents about Momo, a sinister character with bulging eyes who set children dangerous “challenges” such as harming themselves.

Although such scares are typically found to be hoaxes, the survey indicates they can still affect children.

Alarmist warnings

Of those who had seen a hoax, 31% said it had had a negative impact – and 63% of those said the impact had been on their mental health.

“Hoaxes like these often have similar characteristics – and in previous cases, false warnings have circulated suggesting that children were being encouraged to take part in ‘games’ which resulted in self-harm,” TikTok said.

“These hoaxes largely spread through warning messages encouraging others to alert as many people as possible to avoid perceived negative consequences.”

And as well as removing the hoaxes, it would now “start to remove alarmist warnings about them, as they could cause harm by treating the self-harm hoax as real”.

Heightened risk-taking

The report also highlighted previous research suggesting the number of searches for hoax challenges by children “peaked in a way that mirrored the media coverage and public comment”.

Calling for better “media guidelines on dangerous challenges and hoax challenges”, it suggested existing guidance on the reporting of suicide, followed by many media organisations, could be a model.

TikTok said it had worked “to develop a new resource for our Safety Centre dedicated to challenges and hoaxes” and had sought expert advice to improve the warning labels shown to people who search TikTok for content related to harmful challenges or hoaxes.

“A new prompt will encourage community members to visit our Safety Centre to learn more,” it said.

“And should people search for hoaxes linked to suicide or self-harm, we will now display additional resources in search.”

