Twitter on Tuesday slapped a fact-check label on President Donald Trump’s tweets for the first time, a response to long-standing criticism that the company is too hands-off when it comes to policing misinformation and falsehoods from world leaders.
The move, which escalates tensions between Washington and Silicon Valley in an election year, was made in response to two Trump tweets over the past 24 hours. The tweets falsely claimed that mail-in ballots are fraudulent. Twitter’s label says, “Get the facts about mail-in ballots,” and redirects users to news articles about Trump’s unsubstantiated claim.
The tweets, said Twitter spokesperson Katie Rosborough, “contain potentially misleading information about voting processes and have been labeled to provide additional context around mail-in ballots.”
In a statement, Trump campaign manager Brad Parscale said, “We always knew that Silicon Valley would pull out all the stops to obstruct and interfere with President Trump getting his message through to voters. Partnering with the biased fake news media ‘fact checkers’ is only a smoke screen Twitter is using to try to lend their obvious political tactics some false credibility.”
For its 14-year existence, Twitter has allowed misinformation by world leaders and everyday citizens to spread virtually unchecked. Its leaders have long said users would engage in debate on the platform and correct false information on their own.
But Trump has made dozens of false claims on social media, particularly on his preferred medium of Twitter, and has criticized people in ways that critics have argued could violate company policies on harassment and bullying.
For example, Twitter’s action came on a day when the platform faced a barrage of criticism over another set of Trump tweets. Earlier on Tuesday, the widower of a former staffer to Joe Scarborough, a former Republican congressman, asked Twitter chief executive Jack Dorsey to delete tweets by Trump furthering a baseless conspiracy theory about his wife’s death. Those tweets are still up, a reflection of an approach to policing content that can appear inconsistent even as tech companies have increased enforcement.
The company is debating whether to take action on the Scarborough tweets, said a person familiar with the discussions.
Its much larger rival Facebook, by contrast, launched a fact-checking program several years ago. Facebook, which has 2.6 billion users, funds an army of third-party fact checkers to investigate content, which then gets labeled on the site and demoted in its reach.
Twitter, which has about 330 million users, has not had the institutional will to engage fact checkers.
But Twitter has radically changed its approach during the novel coronavirus pandemic. In March, the company revised its terms of service to say it would remove posts by anyone, even world leaders, if such posts went “against guidance from authoritative sources of global and public health information.” That includes comments claiming that social distancing is ineffective or that essential oils can be used to cure the virus.
Soon after, for the first time, Twitter applied the policy to world leaders, removing tweets by Brazil’s President Jair Bolsonaro and Venezuelan President Nicolas Maduro, saying the tweets about breaking social distancing orders and touting false cures had such potential for harm that labeling them would be insufficient.
This month, Twitter rolled out a new policy saying that it would label or provide warning messages about coronavirus-related misinformation, even when that information is not a direct contradiction of health authorities and does not violate the company’s policies. The company said at the time that it may expand the labels to other issue areas, such as other types of health-related hoaxes or other situations in which there is a risk of harm. Tuesday’s labels represent an expansion into a new area: election-related misinformation.
Trump posted the same content about mail-in ballots on Facebook, which did not respond to a question about whether it would label or remove it.
As a matter of policy, Twitter and other tech companies hold world leaders to different standards than everyday users. The content of world leaders is kept up by Facebook, Twitter, and YouTube, even when it violates company policies, a practice known as the “newsworthiness exemption.”
That policy has long been subject to criticism because comments by world leaders can have massive impacts on people’s behavior and have even greater potential to cause harm. Trump’s recent promotion of the drug hydroxychloroquine as an experimental treatment for covid-19, the disease caused by the novel coronavirus, caused prescriptions and drug sales to soar.
If Trump had instructed people to take the drug outright, the statement probably would have been taken down by Facebook or Twitter, according to people who work there who spoke on the condition of anonymity because they were not authorized to speculate publicly. Instead the president walked a fine line, promoting the benefits of the drug and saying he was taking it himself.
The World Health Organization has halted studies of the drug out of concern that it causes more harm than good.
In March, Twitter labeled a manipulated video of presumptive Democratic nominee Joe Biden that was retweeted by Trump. That same month, Facebook took down a misleading ad about the U.S. census, one of two times that Facebook has taken action against the Trump campaign.
– – –
The Washington Post’s Cat Zakrzewski contributed to this report.