Once again, social-media companies are facing criticism as their platforms are used to racially abuse football players, following the dramatic conclusion of the Euro 2020 men’s tournament on Sunday night.
And make no mistake, it is becoming increasingly difficult for the technology giants to defend these incidents. How can they not be held more responsible for the content they share to millions?
In a nutshell, social-media platforms have historically not been regulated as publishers or broadcasters in the way traditional media, such as the BBC, are.
If racist comments appeared below this article, written not by me but by someone who had read it, the BBC would be held to account and the UK regulator, Ofcom, would investigate, intervene and decide on a penalty, probably a fine.
But Ofcom does not yet have such powers over the likes of Facebook, TikTok, YouTube and Twitter, which have until now been largely self-regulating – although, that is coming, as part of the long-anticipated Online Safety Bill.
Whether the threat of large fines is enough to focus the minds of these multibillion-dollar businesses remains to be seen, however. And it is not just in the UK that regulation is planned.
In fairness, while the BBC does have a large global presence, it does not have to deal with anything like the volume of content a platform such as Facebook, with its two billion users, does – text and video written and uploaded in real time, by anybody and everybody.
This sheer volume swamps the armies of human moderators employed by those platforms.
Some describe nightmare shifts sifting through the worst and most graphic content imaginable and then making decisions about what should be done with it.
And the solution these companies are all pouring countless time and money into is automation.
Algorithms trained to seek out offensive material before it is published, blanket bans on incendiary (and illegal) hashtags, and techniques such as “hashing” – which creates a kind of digital fingerprint of a video and then blocks any content bearing the same marker – are already in regular use.
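The fingerprint-matching step behind “hashing” can be sketched in a few lines. This is a simplified illustration only: it uses an exact cryptographic hash, whereas real systems rely on perceptual hashes designed to survive re-encoding and edits, and the blocklist contents here are invented.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest acting as the content's 'fingerprint'.

    Note: production systems use perceptual hashing, which tolerates
    re-encoding and small edits; an exact SHA-256 digest is used here
    purely to illustrate the matching step.
    """
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints of known banned videos.
blocklist = {fingerprint(b"banned-video-bytes")}

def should_block(upload: bytes) -> bool:
    """Block any upload whose fingerprint matches a known banned item."""
    return fingerprint(upload) in blocklist

print(should_block(b"banned-video-bytes"))    # exact copy is caught
print(should_block(b"slightly edited copy"))  # an edit evades exact hashing
```

The second check illustrates why exact hashing alone is a blunt instrument: even a trivial edit changes the digest, which is why platforms invest in more tolerant perceptual-hashing schemes.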
But so far, automation remains a bit of a blunt instrument.
It is not yet sophisticated enough to understand nuance, context, different cultural norms or crafty edits – and there would be very little support for an anonymous algorithm, built by an unelected US technology giant, effectively censoring Western people’s speech without accounting for these factors. (China, of course, has its own state censorship, and US social-media platforms are banned there.)
Here is an example – a friend last night reported an anonymous account that had posted an orangutan emoji beneath an Instagram post belonging to England’s Bukayo Saka.
Now, nobody is going to blanket-ban that emoji.
But in this context, and given Saka is a young black player whose penalty kick was one of those that failed in the deciding moments of the England v Italy Euro 2020 final, the intention is pretty clear.
The response my friend received to her complaint was: “Our technology has found that this probably doesn’t go against our community guidelines” – although it went on to add that the automation “isn’t perfect”.
She has appealed against the decision, but who knows when a pair of human eyes will actually see it?
On Twitter, meanwhile, a user was apparently able to use an extremely offensive racial slur in a tweet about a footballer, before deleting the message.
In a statement, Facebook, which owns Instagram, said it had “quickly removed comments” directed at players.
“No-one should have to experience racist abuse anywhere – and we don’t want it on Instagram,” it said.
Twitter’s response was similar – in 24 hours, it had removed 1,000 posts and blocked accounts sharing hateful content, using “a combination of machine learning based automation and human review”, it said.
Both companies also said they had tools and filters that could be activated to stop account holders from seeing abusive content – but this does not solve the problem of the content being shared in the first place.
“No one thing will fix this challenge overnight,” Facebook added.
And perhaps that is what really lies at the heart of all this – it is not necessarily the limitations of technology that are the issue but old-fashioned human nature.
Prime Minister Boris Johnson said – on Twitter, ironically – that those responsible for posting racist abuse should “be ashamed of themselves”.
This England team deserve to be lauded as heroes, not racially abused on social media.
Those responsible for this appalling abuse should be ashamed of themselves.
— Boris Johnson (@BorisJohnson) July 12, 2021
But for some people, there is something seemingly irresistible about hiding behind a keyboard and saying things they would in all likelihood never say out loud, in person.
Never before have people had access not only to those they want to berate but also a potentially enormous audience of those who will listen – and join in.
For some, it has proved a heady and intoxicating mix – and one that has quickly established itself as the norm.
I was talking to someone recently about something controversial that happened 20 years ago.
Her first question was how social media had responded.
I hesitated for a moment, wondering why I could not remember that detail – and then realised it was simply because it did not exist.