The arguments are in full swing. What constitutes hate speech?
Which Facebook or YouTube or Twitter post demeans or demonizes minorities or ethnic groups? What rules should there be for taking down pages and links – or for banning users?
And in the U.S., how can that be done without trampling First Amendment rights?
Changes will occur, and already have: the conspiracy-selling, far-right Infowars has been banned from the major platforms. That’s the good news.
The dark side is that social media’s ability to spread hate, bias and vicious ideas that can and do lead to violence is not limited to – or even dependent on – particular sites, or on the pages linked and forwarded from Facebook, YouTube, Twitter or Google. It’s built in. It lives in the algorithms the companies use to keep us on their pages, to keep us interested, to keep us excited and, of course, looking at the ads.
Following incidents of anti-immigrant violence, recent studies in Germany made clear that particular content pages alone don’t do the work. Once users landed on a page, the algorithms used by Facebook and YouTube that suggest related sites led steadily deeper into right-wing propaganda.
“[Facebook’s] algorithm that determines each user’s news feed . . . is built around a core mission: promote content that will maximize user engagement. Posts that tap into negative, primal emotions like anger or fear, studies have found, perform best and so proliferate,” wrote New York Times writers Amanda Taub and Max Fisher in an article about attitudes toward refugees in Altena, Germany. Heavier Facebook users found themselves in a world of anti-immigrant posts. Thinking that what they saw represented the majority view in their town, those users hardened their attitudes toward refugees. Residents less involved with social media retained a moderate or welcoming attitude toward immigrants.
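To see why a ranking rule like the one Taub and Fisher describe rewards the angriest posts, consider a minimal, hypothetical sketch of an engagement-ranked feed. This is not Facebook’s actual code; the post titles, counts and scoring formula are invented for illustration.

```python
# A toy engagement-ranked feed (hypothetical, for illustration only).
# If ranking is driven purely by predicted engagement, and emotionally
# charged posts reliably draw more clicks and reactions, those posts
# rise to the top of every feed and earn still more impressions.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int       # times the post was opened
    reactions: int    # likes, angry-faces, shares, comments
    impressions: int  # times the post was shown

def engagement_score(post: Post) -> float:
    """Predicted engagement: the share of viewers who clicked or reacted."""
    if post.impressions == 0:
        return 0.0
    return (post.clicks + post.reactions) / post.impressions

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed purely by engagement; accuracy and tone carry no weight."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local council approves budget", clicks=40, reactions=5, impressions=1000),
    Post("THEY are coming for your town", clicks=220, reactions=310, impressions=1000),
])
print([p.title for p in feed])
# ['THEY are coming for your town', 'Local council approves budget']
```

Nothing in the ranking rule asks whether a post is true or decent; it only asks what keeps people engaged.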
Evidence that Facebook is clearly aware of its power as a social engineer can be found in Evan Osnos’ profile of CEO Mark Zuckerberg and the company in a recent New Yorker article.
Something similar happened with YouTube when a murder allegedly committed by two immigrants sparked right-wing riots in Chemnitz, Germany. Within a couple of days, a video posted on YouTube by an obscure right-wing group, claiming falsely that the rioters had been Muslim refugees, had more views than any regular news video of the incidents.
“Researchers who study YouTube say the episode, far from being isolated, reflects the platform’s tendency to push everyday users toward politically extreme content — and, often, to keep them there,” wrote Fisher and Katrin Bennhold in The New York Times.
They point out that, “YouTube’s recommendation system is the core of its business strategy: Getting people to click on one more video means serving them more ads. The algorithm is sophisticated, constantly learning what keeps users engaged. And it is powerful. A high ranking from the algorithm can mean huge audiences for a video.”
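The pull toward ever more extreme material can be sketched as a toy “watch next” loop. Again, this is a hypothetical illustration, not YouTube’s real recommender; the video labels and retention figures are made up, but they show how optimizing only for continued watching can walk a viewer, one suggestion at a time, toward the most gripping and most extreme content.

```python
# A hypothetical "watch next" loop (illustration only, not YouTube's system).
# Each video has a guess at how well it keeps people watching, plus a related
# video the recommender might suggest. Always picking whichever candidate is
# predicted to hold attention longest drifts steadily toward extremes.

videos = {
    "local news report":  {"keeps_watching": 0.30, "next": "opinion commentary"},
    "opinion commentary": {"keeps_watching": 0.55, "next": "partisan rant"},
    "partisan rant":      {"keeps_watching": 0.80, "next": "conspiracy video"},
    "conspiracy video":   {"keeps_watching": 0.90, "next": "conspiracy video"},
}

def watch_next(current: str) -> str:
    """Recommend whichever candidate is predicted to keep the user watching longest."""
    candidates = [current, videos[current]["next"]]
    return max(candidates, key=lambda v: videos[v]["keeps_watching"])

history = ["local news report"]
for _ in range(4):
    history.append(watch_next(history[-1]))
print(" -> ".join(history))
# local news report -> opinion commentary -> partisan rant -> conspiracy video -> conspiracy video
```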
And that’s where the problem lies. The social media platforms use algorithms that play on our need for stimulus, our worries, our fears and, yes, our hates to keep us clicking so they can show us ads.
That’s the business plan. It works. It makes money. And that’s why any effective effort to curtail social media’s negative impacts is a threat to Silicon Valley’s bottom line. Expect pushback and more Silicon Valley money poured into lobbying.
To riff off the slogan of a presidential campaign from the last century: “It’s the algorithm, stupid.”