Twitter admitted this week that an algorithm behind its automatic photo cropping feature was biased, so it did away with it.

In a blog post Wednesday, the social networking platform said that it had analyzed the artificial intelligence algorithm that crops images before they appear in a user’s timeline, after users complained last year that the system emphasized white people over Black people and favored men over women.

The test results are in, and the platform said the algorithm did in fact display “unequal treatment based on demographic differences.” It favored photos of women over men by 8 percent, and photos of white people over Black people by 4 percent, Twitter said. Within those demographics, it favored white women over Black women by 7 percent, and white men over Black men by 2 percent, the data showed.
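Percentage gaps like these can be read as differences in how often the crop lands on one group's face versus another's in paired comparisons. The sketch below is a hypothetical illustration of that arithmetic, not Twitter's actual evaluation code; the pairing setup and function name are assumptions.

```python
# Hypothetical sketch: quantify a crop-preference gap between two groups.
# Each element of crop_choices records which group's face ("A" or "B")
# the algorithm centered on for one paired test image.

def preference_gap(crop_choices):
    """Return the difference in selection rates between group A and B."""
    total = len(crop_choices)
    rate_a = crop_choices.count("A") / total
    rate_b = crop_choices.count("B") / total
    return rate_a - rate_b  # e.g. +0.08 means group A favored by 8 points

# 54 picks of A versus 46 of B across 100 pairs: an 8-point gap toward A
choices = ["A"] * 54 + ["B"] * 46
print(round(preference_gap(choices), 2))  # 0.08
```

On this reading, a "4 percent" disparity means the crop chose one group's face in roughly 52 out of 100 paired images instead of an even 50.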

The social media firm also checked to see if the AI showed signs of objectification bias, or focused more keenly on parts of women’s bodies.

“We didn’t find evidence of objectification bias — in other words, our algorithm did not crop images of men or women on areas other than their faces at a significant rate,” Rumman Chowdhury, Twitter’s director of software engineering, wrote in the post.

The thinking and methodology behind algorithms often influence what we see online and how often we see it. The systems determine which posts get removed and which ones get to live on, seemingly forever.


It’s common for external AI researchers to conduct algorithm audits and publish their findings. But the information from Twitter offers a rare and detailed acknowledgment from a social media platform of just how unfair its automated systems might be. Users had already flagged that something was wrong, but it was unusual, at a minimum, for Twitter to publish its own analysis of the problem and its own findings.

“I was pleasantly surprised to see this level of transparency,” said Casey Fiesler, assistant professor of technology ethics in the department of Information Science at the University of Colorado Boulder. “It not only made their decision process transparent, but the contribution to the science is helpful for other people and companies thinking about these things.”

Facebook and Instagram have auto-cropping features, too. When a photo appears on your Facebook feed, you have to tap it to see the full image. Instagram reduces full-sized images to squares on people’s profiles. Facebook did not immediately respond to a request for comment on how that feature works.

Twitter says it started using what it called the “saliency algorithm” in 2018 to keep the size of photos consistent across the platform.

The software was designed to estimate which part of a photo would be considered most “salient” or important to see first and was trained with human eye-tracking data, Twitter said. After scanning an image, the algorithm predicts and then scores which areas of a picture are more likely to get attention from users. Then, the part of the image with the highest score becomes the center of the tech-generated crop, Twitter said.
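The score-then-center logic Twitter describes can be sketched in a few lines. The toy below is a reconstruction under the assumption of a per-pixel score grid; Twitter's real model was trained on human eye-tracking data, and the grid, scores, and function name here are invented stand-ins.

```python
# Toy sketch of saliency-based cropping: find the highest-scoring point
# in a saliency map, then center a fixed-size crop window on it,
# clamping the window so it stays inside the image.

def crop_window(saliency, crop_h, crop_w):
    """saliency: 2D list of per-pixel scores.
    Returns (top, left) of a crop_h x crop_w window centered on the
    highest-scoring pixel, clamped to the image bounds."""
    h, w = len(saliency), len(saliency[0])
    best_r, best_c = max(
        ((r, c) for r in range(h) for c in range(w)),
        key=lambda rc: saliency[rc[0]][rc[1]],
    )
    top = min(max(best_r - crop_h // 2, 0), h - crop_h)
    left = min(max(best_c - crop_w // 2, 0), w - crop_w)
    return top, left

# A 4x6 "image" whose most salient point sits near the right edge
scores = [[0, 0, 0, 0, 0, 0],
          [0, 0, 0, 0, 9, 0],
          [0, 1, 0, 0, 0, 0],
          [0, 0, 0, 0, 0, 0]]
print(crop_window(scores, 2, 2))  # (0, 3)
```

The bias complaints concern the scoring step: if the model systematically assigns higher scores to some faces than others, the crop center, and therefore who stays visible in the timeline preview, follows that skew.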

The tool allowed a user to see the full-sized photo by tapping to expand it, exposing the parts hidden by the AI. But it was the reduced-size image that was the subject of user complaints.


Twitter said it stopped cropping standard-sized images on its mobile app as “a direct result of the feedback people shared with us last year that the way our algorithm cropped images wasn’t equitable.” The algorithm is still at play on the desktop version of the site.

“Even if the saliency algorithm were adjusted to reflect perfect equality across race and gender subgroups, we’re concerned by the representational harm of the automated algorithm when people aren’t allowed to represent themselves as they wish on the platform,” Twitter says.

The company says it has come to realize that “how to crop an image is a decision best made by people.”

The situation is the latest example of how algorithmic and machine learning biases can get baked into widespread technology. Humans are inherently flawed and hold judgments that can, knowingly or not, be reflected in the AI behind decision-making products.

For example, in 2019, researchers found that AI used on more than 200 million people in U.S. hospitals falsely concluded that Black patients were healthier than equally sick white patients. That same year, a facial recognition study showed that the tech sometimes used by law enforcement misidentified people of color more often than white people.