A Twitter image-cropping algorithm prefers to show faces that are slimmer, younger and with lighter skin, a researcher has found.
Bogdan Kulynyc won $3,500 (£2,530) in a Twitter-organised contest to find biases in its cropping algorithm.
Earlier this year, Twitter’s own research found the algorithm had a bias towards cropping out black faces.
As a result the company revised how images were handled, saying cropping was best done by people.
The “saliency algorithm” decided how images would be cropped in Twitter previews, before being clicked on to open at full size.
But when two faces were in the same image, users discovered, the preview crop appeared to favour white faces, hiding the black faces until users clicked through.
The pattern held true for images of former US President Barack Obama and senator Mitch McConnell – and for stock images of businessmen of different ethnicities.
Twitter’s own subsequent analysis showed a “4% difference from demographic parity, in favour of white individuals”.
And director of software engineering Rumman Chowdhury said Twitter had concluded “how to crop an image is a decision best made by people”.
The “algorithmic-bias bounty competition” was launched in July – a reference to the widespread practice of companies offering “bug bounties” for researchers who find flaws in code – with the aim of uncovering other harmful biases.
And Mr Kulynyc, a graduate student at the Swiss Federal Institute of Technology in Lausanne’s Security and Privacy Engineering Laboratory, discovered the “saliency” of a face in an image could be increased – making it less likely to be hidden by the cropping algorithm – by “making the person’s skin lighter or warmer and smoother; and quite often changing the appearance to that of a younger, more slim, and more stereotypically feminine person”.
Awarding him first prize, Twitter said his discovery showed beauty filters could be used to game the algorithm and “how algorithmic models amplify real-world biases and societal expectations of beauty”.
1st place goes to @hiddenmarkov whose submission showcased how applying beauty filters could game the algorithm’s internal scoring model. This shows how algorithmic models amplify real-world biases and societal expectations of beauty.
— Twitter Engineering (@TwitterEng) August 9, 2021
Second prize went to Halt AI, a female-founded University of Toronto start-up whose submission, Twitter said, showed the algorithm could perpetuate marginalisation in the way images were cropped.
For example, “images of the elderly and disabled were further marginalised”, the company said.
Taraaz Research founder Roya Pakzad won third prize for an entry that showed the algorithm was more likely to crop out Arabic text than English in memes.