A ground-breaking code to create “a better internet for children” comes into force in the UK on Thursday – but critics say it is too broad and leaves many digital businesses unsure how to comply.
What is the Children’s Code?
The UK’s independent data authority, the Information Commissioner’s Office, introduced the Age Appropriate Design Code in September 2020, allowing companies a year to comply.
Without regulation, it said, the way social-media and gaming platforms and video- and music-streaming sites use and share children’s personal data could cause physical, emotional and financial harm.
It had concerns around:
- inappropriate advertising
- tactics to keep children online for long periods of time, such as auto-playing another video on a website after one has finished
The code is separate from the draft Online Safety Bill but has much in common with it.
How will the code work?
Companies targeting children must:
- design services to be age appropriate and in children’s best interests
- consider whether their use of data keeps children safe from commercial and sexual exploitation
- provide a high level of privacy by default
- stop using design features that encourage children to provide more data
- switch off geo-location services that track where children are
- map what personal data they collect from UK-based children
However, some organisations, including the Coalition for a Digital Economy, have said it is unclear what the data watchdog expects of businesses and called for a better definition of what will be within its scope.
How have technology companies responded?
A flurry of policy changes over the past few months suggests the social-media companies are taking the code seriously:
- YouTube will turn off default auto-play on videos and block ad targeting and personalisation for all children
- TikTok will stop sending notifications to 13- to 15-year-olds after 21:00 and to 16- and 17-year-olds after 22:00
- Instagram is preventing adults from messaging children who do not follow them, defaulting all child accounts to private and requiring users to enter their date of birth to log in
How will the code be enforced?
Those found to be in breach of the code will be subject to the same potential penalties as those who fall foul of the General Data Protection Regulation, which include a fine of up to 4% of annual global turnover.
As with GDPR, there will be support rather than penalties at first – but the ICO has the power to investigate or audit organisations it believes are not complying.
It would expect companies to offer proof their services were designed in line with the code, ICO regulatory futures and innovation executive director Stephen Bonner blogged.
“Social-media platforms, video and music streaming sites and the gaming industry”, rather than more general retailers, would face the most scrutiny.
And the code could have “global influence”, with US Senate and Congress members calling on major technology companies to voluntarily adopt the same standards.
The Data Protection Commission in the Republic of Ireland is also preparing similar regulations.
Will companies need to know the age of users?
Despite age limits of 13, many social-media sites have much younger users.
But while age assurance will play a part in determining whether the code is being followed correctly, how companies achieve it is being left up to them.
The ICO will set out its position later in the autumn – but it does suggest some age-verification methods:
- use of artificial intelligence
- third-party age verification services
- technical measures
Rachel O’Connell, founder of TrustElevate, a platform designed to handle young people’s data, said: “The self-declaration of age means that these measures can be easily circumvented – and an unintended consequence could be incentivising young people to lie about their ages.”