Social networks haven’t done enough to prevent manipulation of voters on their platforms in 2020, according to an overwhelming majority of tech experts surveyed by The Washington Post.
Companies including Facebook, Google’s YouTube and Twitter are under immense political pressure to fight disinformation after foreign interference in the last presidential election. But a whopping 89% of experts in The Technology 202 Network say their responses so far do not inspire confidence.
“Fighting misinformation and online voter manipulation is not a one-time effort, it is a continuous game of cat-and-mouse,” said Hadi Partovi, chief executive of Code.org and an early investor in Facebook, Uber and Airbnb. “The leaders of all the major online platforms would agree their job is far from ‘done.’”
The Washington Post’s Technology 202 Network is a panel of more than 100 experts from across government, industry and the consumer advocacy community invited to vote in ongoing surveys on the most pressing issues in the field.
The responsibility to tackle disinformation and other threats shouldn’t just be on tech companies, some industry experts argued. “Social media companies should be doing more, but are we comfortable with [Facebook chief executive] Mark Zuckerberg or [Twitter’s] Jack Dorsey as the arbiters of truth?” said Glenn Kelman, the chief executive of the real estate service Redfin.
But Kelman says that policymakers aren’t really equipped to handle the problem, either. “We elect governments, not corporations, to regulate speech, run fair elections and deter foreign interference. But most government folks lack the technical expertise to regulate the internet, and many actually prefer a wide-open field for partisan warfare.”
Rep. Ro Khanna, D-Calif., who represents Silicon Valley, insists that “technology companies should be investing in authentication tools, both for users and for content, to ensure that the news Americans are seeing online this year is honest and real.” Yet Khanna also says it’s time for Congress to finally pass legislation to force companies to take action: “Congress should also provide a basic regulatory framework so that social media companies remove blatant disinformation and hate speech that goes viral from their platforms.”
Many experts said tech companies’ lack of transparency about interference attempts makes it virtually impossible for voters to know whether adversaries are trying to influence them.
“The first step in preventing manipulation for individuals is knowing that it may be afoot,” said Danielle Keats Citron, a Boston University law professor and 2019 MacArthur fellow, commonly known as a “genius grant” recipient. “In short, so much is hidden from voters that we cannot tell the extent of the manipulation, let alone what companies are doing about it.”
Stewart Butterfield, the chief executive of Slack, said he’s “skeptical” that tech companies have done enough – “but the truth is we have no real way to know.”
Tech companies have promised big investments to address potential foreign interference on their platforms, including rooting out fake accounts and labeling posts that fact-checkers have deemed false. But the Network experts say these changes are just the tip of the iceberg. “Small initiatives with great fanfare won’t do the job,” said Tom Wheeler, chairman of the Federal Communications Commission during the Obama administration.
The approach to tackling disinformation and other threats has been inconsistent across Big Tech, said Falon Fatemi, the chief executive of start-up Node.io. “While some companies have invested heavily in this area and taken a proactive approach to this challenge, it is evident that some have not yet employed all that artificial intelligence can do to combat this problem,” she wrote. “All tech companies have a moral, and furthermore, business rationale to do better here, and to employ the latest and greatest AI for good in this uncertain time.”
Some respondents singled out Facebook’s policies as being particularly problematic: The company has said it will permit politicians to lie in ads and allow campaigns to target narrow slices of voters with political messages based on highly personalized data amassed by the company. (Google has put tougher restrictions on how political ads can be targeted.)
If Facebook wants to be serious about election integrity, it should follow Twitter’s lead and do away with political ads altogether, said Karla Monterroso, the head of Code 2040, a nonprofit group advocating for diversity in the industry. “Tying money and targeted ad data to freedom of speech is ridiculous,” Monterroso said. “No one is entitled to that amount of data tied to a microphone in exchange for money. Especially if they are spreading lies.”
“Facebook won’t even stop running political ads. It’s hard to fathom, and Zuckerberg’s reasoning is either based on extremely ulterior motives or is just hardheaded,” said Bradley Tusk, founder of Tusk Ventures.
Several Network participants noted that the companies don’t have the right incentives to fix the problems with disinformation because of their business models. “Truly fixing Facebook’s threat to free and fair elections would require fundamental changes to its business model, but Facebook is not going to risk its billion dollars a week in targeted digital advertising revenue unless it’s forced to do so,” said Sally Hubbard, director of enforcement strategy at Open Markets Institute.
“The actions of Facebook and Google since 2016 raise serious doubts about the sincerity of their commitment to protecting democracy,” said Roger McNamee, a Silicon Valley investor and author of “Zucked: Waking Up to the Facebook Catastrophe.” “Both companies employ business models and algorithms that amplify hate speech, disinformation, and conspiracy theories because such content is unusually profitable. Rather than compromise their business models, both companies have made only cosmetic changes to appease policymakers.”
And rampant disinformation on these platforms disproportionately affects minority groups, warned Rashad Robinson, the president of the civil rights group Color of Change. “Make no mistake, these platforms know they have a problem – they acknowledged it in 2016 and they are aware of it today,” he said. “Yet these companies are making an active choice not to do more to end misinformation online, and that choice will disproportionately harm the communities most in need of the resources determined by elections and the upcoming census count.”
Just 11% of The Network – which includes executives from most major social networks – said that the companies are doing enough.
Kevin Martin, Facebook’s vice president for public policy and FCC chairman during the George W. Bush administration, sought to highlight the company’s progress.
“We have made strides in improving our security efforts through massive investments in people and technology to increase transparency, combat abuse and protect election integrity across the world,” Martin said. “While we know our work will never be complete, we are committed to combating these threats.”
Jesse Blumenthal, vice president of technology and innovation policy for the Koch network, argued it’s up to each individual to discern fact from fiction. “Social media companies are in a ‘cat-and-mouse’ game with all sorts of bad actors. It is easy to focus only on the challenges that exist and lose sight of the ways that social media platforms can and do help bring important information to light,” he said. “Ultimately, it is up to each of us to sort true information from falsehoods. Individuals are responsible for their actions. No company can or should think for you.”