Twitter’s algorithm favours right-leaning politics, research finds


Twitter amplifies tweets from right-leaning political parties and news outlets more than from the left, its own research suggests.

The social-media giant said it made the discovery while exploring how its algorithm recommends political content to users.

But it admitted it did not know why, saying that was a “more difficult question to answer”.

Twitter’s study examined tweets from political parties and users sharing content from news outlets in seven countries around the world: Canada, France, Germany, Japan, Spain, the UK, and the US.

It analysed millions of tweets sent between 1 April and 15 August 2020.

Researchers then used the data to measure how much more tweets were amplified on the algorithmically ranked feed than on the reverse-chronological feed; users can choose between the two views.

They found that mainstream parties and outlets on the political right enjoyed higher levels of “algorithmic amplification” compared with their counterparts on the left.
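As a rough illustration of what such a comparison measures, the toy calculation below computes an "amplification ratio" for a group of accounts: how much more often their tweets are seen in the algorithmic feed than in the chronological one. Everything here, the function name, the impression counts, and the simple ratio, is hypothetical; Twitter's published methodology (which relied on a long-running control group of users kept on the chronological timeline) is more involved than this sketch.

```python
# Toy illustration of an "amplification ratio": how much more visible a
# group's tweets are in an algorithmically ranked feed than in a
# reverse-chronological one. All numbers below are made up.

def amplification_ratio(algo_impressions: int, chrono_impressions: int) -> float:
    """Return the ratio of algorithmic to chronological impressions.

    A value above 1.0 means the ranked feed surfaced the tweets more often.
    """
    if chrono_impressions <= 0:
        raise ValueError("chronological impressions must be positive")
    return algo_impressions / chrono_impressions

# Hypothetical impression counts for two groups of political accounts.
groups = {
    "group_A": {"algo": 1_200_000, "chrono": 800_000},
    "group_B": {"algo": 990_000, "chrono": 900_000},
}

for name, counts in groups.items():
    ratio = amplification_ratio(counts["algo"], counts["chrono"])
    print(f"{name}: amplification ratio = {ratio:.2f}")
```

Under these invented numbers, group_A's tweets would be amplified 1.5 times as much in the ranked feed, against 1.1 for group_B; Twitter's study reported this kind of gap between right- and left-leaning accounts.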

Rumman Chowdhury, director of Twitter's Meta (Machine-learning Ethics, Transparency and Accountability) team, said the company's next step was to find out the reason behind the phenomenon.

“In six out of seven countries, tweets posted by political-right elected officials are algorithmically amplified more than the political left. Right-leaning news outlets… see greater amplification compared to left-leaning,” she said.

“Establishing why these observed patterns occur is a significantly more difficult question to answer and something Meta will examine.”

Researchers noted that the difference in amplification could be due to the “differing strategies” used by political parties to reach audiences on the platform.

They also said the findings did not suggest that Twitter's algorithms pushed "extreme ideologies more than mainstream political voices", another common concern voiced by the platform's critics.

This is not the first time Twitter has highlighted apparent bias in its algorithm.

In April, the platform revealed that it was conducting a study to determine whether its algorithms contributed to “unintentional harms”.

In May, the company revealed that its automatic image-cropping algorithm favoured white people over Black people, and women over men.
