Exactly a year ago, Facebook’s chief executive, Mark Zuckerberg, testified before Congress and apologized for his company’s role in enabling “fake news, foreign interference in elections and hate speech.”
It was a memorable moment amid a broader reckoning that continues to inspire debate over how closely Facebook and other technology giants should be regulated.
As Silicon Valley grapples with its version of becoming too big to fail, Zuckerberg and his industry peers might take lessons from Wall Street, whose leaders have some experience with government scrutiny. (On Wednesday, bank chief executives were being grilled by Congress.)
Although it won’t address all of Big Tech’s problems, a simple rule that bolsters the banking system could do a lot to clean up some of the uglier aspects of social media that Zuckerberg felt compelled to apologize for.
The concept is “know your customer” — or KYC, as it’s called on Wall Street — and it’s straightforward: Given concerns about privacy, security and fraud when it comes to money, no bank is allowed to take on a new customer without verifying that customer’s identity and vetting his or her background.
The idea of applying such a rule to social media has been floated before, but it has so far failed to take hold. Now may be the right time.
Consider this: Facebook has said it shut down more than 1.5 billion fake accounts from April through September last year (yes, that’s a “B” in billion). That was up from the 1.3 billion such accounts it eliminated in the six previous months. To put those numbers in context, Facebook has a reported user base of 2.3 billion.
What if social media companies had to verify their users the same way banks do? You’d probably feel more confident that you were interacting with real people and were not just a target for malicious bots.
First, let’s acknowledge the practical considerations. Vetting the vast universe of those on social media would be a gargantuan task.
When I broached the idea of applying a “know your customer” principle to their business, several senior executives at social media companies recoiled at the prospect, questioning how they would pull off such a huge feat, especially in emerging markets where many people lack credit cards, and even fixed street addresses can be hard to come by.
Then there are the legitimate complaints about Facebook and its ilk already knowing too much about users. Who would want them to know even more? And what would the companies do to protect personal information better than they have in the past? After all, not long ago, Facebook disclosed that tens of millions of user passwords had not been stored securely.
But the stakes may be too high not to consider some kind of heightened verification process.
Facebook and Twitter, at least, clearly appreciate the importance of verification as a concept: Both offer blue-check-mark programs to confirm the authenticity of a small percentage of users, like celebrities.
If the vetting of legitimate users were expanded, and the number of phony ones were reduced, the amount of hate speech and fake news polluting the social media platforms would almost certainly dwindle. And it would be hard for the companies to willfully ignore what remained.
How would it work? A modified version of what goes on in the financial services industry is one possibility.
When you open a bank account, you typically have to provide your name, address, Social Security number and date of birth.
That information is cross-checked against databases to ensure that you’re a real person, that your credit score is solid and that your name doesn’t appear on a list of “politically exposed persons,” who are considered higher-risk for bribery or corruption. The verification is also used to determine whether you have a criminal record that suggests possible money laundering or identity theft.
That kind of vetting helps protect banks from doing business with criminals and helps protect clients who could be vulnerable to fraud.
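For readers curious what that vetting looks like in practice, the flow described above can be sketched in a few lines of Python. Everything here is hypothetical — the in-memory “databases,” the field names and the `kyc_check` function stand in for the external identity, watch-list and criminal-record services a real bank would query, not any institution’s actual system.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the external databases a bank cross-checks.
IDENTITY_RECORDS = {("Jane Doe", "1985-04-12", "123-45-6789")}
PEP_LIST = {"Some Official"}            # "politically exposed persons"
CRIMINAL_RECORDS = {"John Fraudster"}   # prior records suggesting fraud risk

@dataclass
class Applicant:
    name: str
    address: str
    date_of_birth: str
    ssn: str

def kyc_check(applicant: Applicant) -> tuple[bool, list[str]]:
    """Run the basic checks described above: identity verification,
    politically-exposed-person screening, and a criminal-record lookup.
    Returns (approved, list_of_reasons_for_rejection)."""
    reasons = []
    identity_key = (applicant.name, applicant.date_of_birth, applicant.ssn)
    if identity_key not in IDENTITY_RECORDS:
        reasons.append("identity could not be verified")
    if applicant.name in PEP_LIST:
        reasons.append("politically exposed person")
    if applicant.name in CRIMINAL_RECORDS:
        reasons.append("criminal record on file")
    return (not reasons, reasons)
```

A social-media version could run the same kind of pipeline at sign-up, with thresholds tuned for scale rather than for the stricter standards banking regulators impose.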
There is a precedent for adapting such a regimen for social media: NextDoor, a social network that helps people communicate in their local communities, won’t let new users sign up unless their addresses can be verified.
A would-be NextDoor user must submit a credit card or phone number, which the site cross-references against databases. If you don’t have either, the site sends a postcard to your address with a code you can use online.
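NextDoor’s two-tier approach — instant cross-referencing when possible, a mailed code as a fallback — can be sketched as follows. This is a simplified illustration, not NextDoor’s actual implementation; the lookup tables and function names are invented for the example.

```python
import random
import string

# Hypothetical lookup tables mapping a credit card or phone number to the
# street address it is registered at. A real service would query external
# databases rather than hold these in memory.
CARD_ADDRESSES = {"4111-xxxx-xxxx-1111": "12 Oak Lane"}
PHONE_ADDRESSES = {"555-0100": "12 Oak Lane"}

def start_verification(address, credit_card=None, phone=None):
    """Verify instantly if a submitted card or phone number maps to the
    claimed address; otherwise fall back to mailing a postcard code.
    Returns (status, code) — code is None when no postcard is needed."""
    if credit_card and CARD_ADDRESSES.get(credit_card) == address:
        return ("verified", None)
    if phone and PHONE_ADDRESSES.get(phone) == address:
        return ("verified", None)
    # Neither matched: generate a six-digit code. In a real system it would
    # be printed on a postcard and mailed to the address on file.
    code = "".join(random.choices(string.digits, k=6))
    return ("postcard_sent", code)

def confirm_postcard(entered_code, mailed_code):
    """Complete verification when the user types in the mailed code."""
    return entered_code == mailed_code
```

The design choice worth noting is the fallback chain: each step trades speed for coverage, so users without a credit card or phone can still be verified, just more slowly.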
The good news is that NextDoor knows its customers very well. The bad news is that its verification process would be extremely difficult to expand widely and quickly.
The company doesn’t disclose user numbers, only that it is in more than 200,000 neighborhoods. Reports suggest it has tens of millions of users, a far cry from the billions using more popular social networks. If introducing such a system at that scale is too daunting, testing it in the United States and Canada first might be one way to start.
The need for a “know your customer” rule could take on a new urgency as social networks evolve to become fully encrypted private-messaging services, as Zuckerberg has indicated he plans to do with Facebook.
“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” he said.
It’s a laudable goal, but if the users are fake, or otherwise unaccountable, from the outset, the ease with which they could spread misinformation on encrypted networks would have even more troubling implications.
That is, unless there were a viable mechanism for vouching for individual identities — for networks to know their customers, as it were.
It just seems like common sense.