Following an outpouring of sadness after the murder of Conservative MP Sir David Amess, his colleagues from across the Commons have been raising concerns for their own safety.
And one common thread has emerged – the amount of abuse politicians face online.
Home Secretary Priti Patel said the government’s Online Safety Bill would offer an opportunity for all MPs to come together to close “the corrosive space online where we see just dreadful behaviour”.
But what measures does the draft legislation provide to tackle the threats and abuse? And should more be added to address the toxicity politicians face on social media?
Why was the bill introduced?
Between the murder of Labour MP Jo Cox in 2016 and 2020, the Parliamentary Liaison and Investigation Team – set up by the Metropolitan Police in the wake of her death – received 582 reports from MPs of malicious communications, and many more incidents went unreported.
The Malicious Communications Act 1988 and the Communications Act 2003 already provide some tools for dealing with the perpetrators of online abuse.
But last year the Law Commission warned the laws were not fit for purpose, failing to properly criminalise some actions – like cyber-flashing – and over-criminalising others, meaning freedom of speech was not properly protected.
The Commission’s recommendations to reform the law were welcomed by the government, which a year earlier had launched its own proposals for new laws to tackle online harms.
Come April 2021, draft legislation for the Online Safety Bill was published.
What is in the bill to tackle abuse of MPs?
The bill particularly focuses on how to protect children and young people online from a range of risks, such as grooming, revenge porn, hate speech, images of child abuse and posts relating to suicide and eating disorders.
But even in its draft form its 145 pages spread much wider, with proposals on how to address terrorism, disinformation, racist abuse, pornography and online scams.
Rather than just targeting individuals who make offensive posts, the bill plans to put the onus on those who own the platforms.
The law would see Ofcom appointed as the regulator of social media sites and force the companies to have a duty of care for their users – including protecting adults from legal but harmful content, such as abuse that doesn’t cross the criminality threshold.
Ofcom would issue codes of practice outlining the systems and processes that companies need to adopt in order to be compliant.
The government has promised to protect content defined as “democratically important”, meaning people would still be allowed to promote or oppose government policy or a political party ahead of a vote, election or referendum.
But if that overall duty of care wasn’t fulfilled, Ofcom could punish social media firms with fines up to £18m or 10% of annual turnover – whichever is higher.
The government said it would also have the power to block access to sites in the UK.
And it believes these measures would force companies into taking action to stamp out online abuse, including that directed at MPs.
But critics claim the bill would give too much power to US tech firms, “effectively outsourcing internet policing from the police, courts and Parliament to Silicon Valley”.
What do politicians think is missing?
Others, though, want the bill to go further, with a number of MPs calling for a measure to stop people posting on social media sites anonymously.
Sir David himself called for the practice to be stopped in a book he wrote last year, saying he had frequently been abused online by “ignorant cowards” who could remain anonymous.
He wrote: “The law in this regard needs to be changed and updated as a matter of urgency.”
While paying tribute in the Commons to his colleague and friend, fellow Tory MP Mark Francois echoed the call, saying the Online Safety Bill needed to be “toughened up”.
And he said “David’s law” should be introduced, “the essence of which would be that while people in public life must remain open to legitimate criticism, they can no longer be vilified or their families subject to the most horrendous abuse, especially from people who hide behind a cloak of anonymity with the connivance of the social media companies for profit”.
Labour’s Chris Bryant – who received a death threat in the days after Sir David’s murder – said he had thought many times about leaving politics over the abuse he gets online, and he too wants to get rid of anonymity online.
He told BBC News: “I think people sometimes write things that they would never dream of saying to somebody’s face or putting their name to. I would prefer [users to be verified].”
But others have warned about the consequences of such a ban.
Former Labour MP Ruth Smeeth – who now runs the Index on Censorship organisation – says online trolling and abuse “was a grim but far too normal part of public life”.
Ms Smeeth experienced a lot of online attacks herself when she represented Stoke-on-Trent, and recalls that in the summer of 2016 alone she received 25,000 pieces of anti-Semitic abuse.
But she says she wouldn’t support a ban. “We have to do something to tackle and improve our online culture but a knee-jerk response to ban anonymous accounts will have unintended consequences – not just on our collective free speech but on our ability to engage with whistle blowers and dissidents in every corner of the world.”
Executive director of the Open Rights Group, Jim Killock, agrees that anonymity is necessary for many people – such as members of the LGBT+ community, trade unionists and victims of domestic violence – who need to avoid exposure to people who would harm them.
He says “anything that reduces or removes their anonymity will impact on their ability to express themselves freely”.
Mr Killock adds that, in the vast majority of cases, people using social media accounts of any sort can be traced through existing police powers, whether they post under their real names or anonymously, because accounts are linked to phone numbers, IP addresses and email addresses.
“It is unclear that this proposal would deliver any significant benefit, while it could harm any of the vulnerable people it seeks to protect,” he adds.
Tory MP and chair of the committee scrutinising the draft of the Online Safety Bill, Damian Collins, is angry about the growing amount of abuse directed at MPs, calling it a “sickness in British political debate”.
But he still doesn’t think anonymity should be taken away from social media users.
Mr Collins says: “If users exploit it to break hate-speech laws or those against incitement of violence, I think social media companies should have enough information on who they really are so that they are able to clearly identify them to the police.
“Not putting your real name to an account should give you no protection from an investigation into terrorist, racist, homophobic, or misogynistic messages you send.”
Will the bill work?
While the intentions of the draft bill, and of the MPs calling for additions, may be good, there is the question of practicality.
For example, Bryan Betts, principal analyst at tech research firm Freeform Dynamics, says there is little point in giving Ofcom powers if it does not have the capability to use them.
“Even the EU with all its resources is facing stern resistance from big tech [like social media companies],” he says.
“Could Ofcom’s legal department out-gun Facebook’s army of lawyers and lobbyists? Or one of the Chinese social media giants? It’s yet another area where we’re a lightweight player in a heavyweight match.”
Mr Betts also questions whether banning anonymous accounts would work, when huge amounts of abusive material are posted online quite openly. Indeed, it’s something Twitter itself pointed out after the racist abuse aimed at England’s footballers following the Euro 2020 final.
The analyst says one option worth examining is a verification method in which tech companies can prove a real person is signing up for an account, without identifying them – something Mr Collins seemed to allude to.
“And preferably [that would be carried out] via some kind of secure independent broker,” adds Mr Betts.
“But that’s not going to satisfy [everyone] – and even if it did, it would make the identity broker a key target for political and judicial pressure, and of course for hackers and malware.”
The BBC understands that user verification will play a role in the government’s new framework and that Ofcom may require some platforms to take steps to manage who can access their services using that route.
But there are concerns in Whitehall that a full ban of anonymity would pose those security risks and restrict freedom of speech for those without ID.
A government spokesman said: “The Online Safety Bill will ensure there is no safe space for criminal content and when it does appear it will be removed quicker.
“The big social media companies will also need to keep their promises by enforcing their rules consistently and improving how users can report harmful content, or face being fined.
“Where abuse is illegal the police already have a range of legal powers to identify individuals who attempt to hide behind anonymity.”
Would banning anonymity help stop abuse of MPs?
Analysis by Shayan Sardarizadeh, disinformation specialist, BBC News
There is little evidence of a strong correlation between online anonymity and abuse, with studies showing many abusers have no qualms about using their real names online.
An investigation by Twitter following the abuse of three England footballers after the Euro 2020 final found that “99% of account owners were identifiable” and ID verification “would have been unlikely” to prevent the abuse.
And most users on Facebook, for instance, are active with their real names and are required to provide their phone numbers, yet abusive comments and trolling are widespread on the platform.
Ending anonymity would also be unwelcome for tens of millions of users in authoritarian states who rely on it to exercise their right to free speech. ID verification would inevitably lead to many either having to leave social media or risk persecution.
Some experts believe obliging platforms to change some of their algorithms would be a far more effective way of tackling online abuse.
Users with a penchant for extreme content can be radicalised further by recommendation algorithms, which serve them material that reinforces their existing beliefs – creating echo chambers of like-minded users whose views are rarely challenged by what the platforms offer them.
Major social networks tweaked their recommendations during the pandemic to reduce the spread of Covid misinformation, but critics say areas such as hateful or extreme content remain a concern.