An internal evaluation of Twitter’s recommendation algorithms concluded that they amplify right-leaning political content more than left-leaning content, company researchers announced Thursday, undercutting allegations by many conservatives who contend they are being censored on the platform.
Twitter researchers analyzed millions of 2020 tweets by elected officials in seven countries — Canada, France, Germany, Japan, Spain, Britain and the United States — as well as posts that linked to political content from news outlets. Researchers relied on outside experts to determine what was right- or left-leaning rather than deciding for themselves.
“Our results reveal a remarkably consistent trend: In 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left,” the researchers said in a 27-page report.
The research is months in the making, part of Twitter’s promise to evaluate the underpinnings of its platform after the company was criticized for its role in the Jan. 6 insurrection at the Capitol. In the weeks leading up to the riot, groundless theories and false claims about Joe Biden’s victory in the 2020 election swamped the site. Critics say President Donald Trump used the platform to stoke the anger of his supporters by claiming the election was “rigged.”
On Jan. 8, Trump was banned from the platform because of the risk of “further incitement of violence” from his tweets, Twitter said. Unlike his suspensions from Facebook and YouTube, Twitter’s ban is permanent.
In response to concerns about the far-reaching impacts of its platform, Twitter in April announced its “Responsible Machine Learning Initiative,” driven by its ML Ethics, Transparency and Accountability (META) team, aimed at studying the “unintentional harms” caused by its product and making those findings public.
“When Twitter uses ML, it can impact hundreds of millions of Tweets per day,” the company said in a blog post announcing the effort. “Sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter.”
In a blog post Thursday, Rumman Chowdhury, the head of Twitter’s META team, said that researchers set out to determine whether some political groups or news outlets were amplified more than others. While this study concluded that the answer is yes, figuring out why is the much bigger challenge.
Like all recommendation engines, Twitter’s algorithms aim to maximize engagement. Through a process of constant, kaleidoscopic trial-and-error, the algorithms have concluded that right-leaning content generates more engagement than left-leaning content. But explaining why has far more to do with human nature — and the way people engage with ideas they do or don’t agree with — than it does with coding.
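In simplified form, an engagement-maximizing ranker scores each candidate tweet by predicted engagement and surfaces the highest-scoring ones first. The sketch below is purely illustrative — the features and weights are invented assumptions, not Twitter’s actual model:

```python
# Illustrative sketch of engagement-based ranking. The features and
# weights below are invented for illustration; Twitter's real system
# is far more complex and its internals are not public.

def engagement_score(tweet):
    # Toy weighting: replies count most, then retweets, then likes.
    # These weights are assumptions, not Twitter's real values.
    return (3.0 * tweet["predicted_replies"]
            + 2.0 * tweet["predicted_retweets"]
            + 1.0 * tweet["predicted_likes"])

def rank_timeline(tweets):
    # Sort candidate tweets by descending predicted engagement.
    return sorted(tweets, key=engagement_score, reverse=True)

candidates = [
    {"id": "a", "predicted_replies": 1, "predicted_retweets": 4, "predicted_likes": 10},
    {"id": "b", "predicted_replies": 5, "predicted_retweets": 2, "predicted_likes": 3},
]
ranked = rank_timeline(candidates)
# Tweet "b" ranks first: 3*5 + 2*2 + 1*3 = 22, versus 21 for tweet "a".
```

The point of the sketch is that nothing in such a ranker references politics directly; whichever content users interact with more simply scores higher.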
The dynamic is not unique to Twitter: Though artificial intelligence is widely deployed in daily life, powering such applications as medical software, social media feeds and facial recognition, machine learning models often are referred to as “black boxes” given how difficult it is to interpret their decision-making processes. It’s so challenging that there is a vast field of industry research and funding efforts dedicated to making AI more “explainable.”
Chowdhury said the team would embark on “root cause analysis” to determine what changes, if any, are necessary to “reduce adverse impacts” from the home timeline algorithm. Twitter is sharing the research with outsiders and making aggregated data sets available for third-party researchers who want to reproduce the META team’s findings.
The research is part of the company’s efforts to make its internal data more accessible to outside sources, a move that could put pressure on other social media platforms like Facebook — which has garnered criticism for its refusal to share its own evaluations with the public — to ramp up transparency.
“Algorithmic amplification is not problematic by default — all algorithms amplify,” Chowdhury said in the blog post. “Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it.”
Twitter’s finding stands in contrast to allegations of censorship from conservatives who have accused Twitter, Facebook and other platforms of silencing their voices. A raft of conservatives have been suspended or banned for violating the platforms’ guidelines, many relating to misinformation.
Trump and his supporters have used social media as a vehicle to insist, without evidence, that the 2020 election was “rigged” and “stolen.” They have pushed to dismantle Section 230 of the Communications Decency Act, the federal law that shields platforms from liability stemming from user posts and content moderation decisions. Meanwhile, many liberals have criticized social media companies for not taking further steps to censor dangerous speech from conservatives on their platforms.
Trump’s banishment from the biggest platforms helped jump-start his plan to form Trump Media & Technology Group, which says it aims to rival “the liberal media consortium.” According to the company overview, it will include a Twitter-like social network, Truth Social, that will allow users to post “Truths” and “Re-Truths,” similar to tweets and retweets.
Within hours of the platform’s beta launch Wednesday night, pranksters found what appeared to be an unreleased test version and posted a picture of a defecating pig to the “donaldjtrump” account. The site has since been taken offline.