Internet companies were built on a model in which people gave up their information for free services. Now, that idea is under siege.
SAN FRANCISCO — The contemporary internet was built on a bargain: Show us who you really are, and the digital world will be free to search or share.
People detailed their interests and obsessions on Facebook and Google, generating a river of data that could be collected and harnessed for advertising. The companies became very rich. Users seemed happy. Privacy was deemed obsolete, like bloodletting and milkmen.
Now, the consumer-surveillance model underlying Facebook and Google’s free services is under siege from users, regulators and legislators on both sides of the Atlantic. It amounts to a crisis for an industry that up until now had taken a reactive, whack-a-mole approach to problems like the spread of fraudulent news and misuse of personal data.
The recent revelation that Cambridge Analytica, a voter-profiling company that had worked with Donald Trump’s presidential campaign, harvested data from 50 million Facebook users set off the current uproar, even if its origins lie as far back as the 2016 election. There have been many months of allegations and arguments that the internet in general and social media in particular are pulling society down instead of lifting it up.
That has inspired debate about more restrictive futures for Facebook and Google. At the furthest extreme, some dream of the companies becoming public utilities. More benign business models that depend less on ads and more on subscriptions have been proposed, although it’s unclear why either company would abandon something that has made them so prosperous.
Congress might pass targeted legislation to restrict consumer data use in specific sectors, such as a Senate bill that would require increased transparency in online political advertising, said Daniel J. Weitzner, director of the Internet Policy Research Initiative at the Massachusetts Institute of Technology.
There are other avenues, said Jascha Kaykas-Wolff, chief marketing officer of Mozilla, the nonprofit organization behind the popular Firefox browser: advertisers and large tech platforms could collect vastly less user data and still customize ads to consumers effectively.
“They are just collecting all the data to try to find magic growth algorithms,” Kaykas-Wolff said of online marketers. Last week, Mozilla halted its ads on Facebook, saying the social network’s default privacy settings allowed access to too much data.
The greatest likelihood is that the internet companies, frightened by the tumult, will accept a few more rules and work a little harder for transparency. And there will be hearings on Capitol Hill.
What Europe is doing
The next chapter is set to play out not in Washington but in Europe, where regulators have already cracked down on privacy violations and are examining the role of data in online advertising.
The Cambridge Analytica case, said Vera Jourova, the European Union (EU) commissioner for justice, consumers and gender equality, was not just a breach of private data. “This is much more serious, because here we witness the threat to democracy, to democratic plurality,” she said.
Although many people had a general understanding that free online services used their personal details to customize the ads they saw, the latest controversy exposed the machinery.
Consumers’ seemingly benign activities — their likes — could be used to covertly categorize and influence their behavior. And not just by unknown third parties. Facebook itself has worked directly with presidential campaigns on ad targeting, describing its services in a company case study as “influencing voters.”
“People are upset that their data may have been used to secretly influence 2016 voters,” said Alessandro Acquisti, a professor of information technology and public policy at Carnegie Mellon University in Pittsburgh. “If your personal information can help sway elections, which affects everyone’s life and societal well-being, maybe privacy does matter after all.”
In interviews, Mark Zuckerberg, Facebook’s chief executive, and Sheryl Sandberg, Facebook’s chief operating officer, seemed to accept the possibility of increased privacy regulation, something that would have been unlikely only a few months ago. But some trade-group executives also warned that any attempt to curb the use of consumer data would put the business model of the ad-supported internet at risk.
“You’re undermining a fundamental concept in advertising: reaching consumers who are interested in a particular product,” said Dean C. Garfield, chief executive of the Information Technology Industry Council, a trade group in Washington whose members include Amazon, Facebook, Google and Twitter.
If suspicion of Facebook and Google is a relatively new feeling in the United States, it has been embedded in Europe for historical and cultural reasons that date back to the Nazi Gestapo, the Soviet occupation of Eastern Europe and the Cold War.
“We’re at an inflection point, when the great wave of optimism about tech is giving way to growing alarm,” said Heather Grabbe, director of the Open Society European Policy Institute. “This is the moment when Europeans turn to the state for protection and answers and are less likely than Americans to rely on the market to sort out imbalances.”
In May, the EU is instituting a comprehensive new privacy law, the General Data Protection Regulation. The new rules treat personal data as proprietary, owned by an individual, and any use of that data must be accompanied by permission — opting in rather than opting out — after receiving a request written in clear language, not legalese.
Mélanie Voin, a spokeswoman for the European Commission, said the protection rules will have more teeth than the current 1995 directive. For example, a company experiencing a data breach involving individuals must notify the data-protection authority within 72 hours and would be subject to fines of up to 20 million euros or 4 percent of its annual revenue, whichever is higher.
Why laws fail in U.S.
The United States does not have a consumer-privacy law like the General Data Protection Regulation. But after years of pushing for similar legislation, privacy groups said recent events were giving them new momentum — and they were looking to Europe for inspiration.
“With the new European law, regulators for the first time have real enforcement tools,” said Jeffrey Chester, executive director of the Center for Digital Democracy, a nonprofit group in Washington. “We now have a way to hold these companies accountable.”
Any ambitions for new rules may run into the realities of the tech industry.
Privacy advocates and even some U.S. regulators have long been concerned about the ability of online services to track consumers and make inferences about their financial status, health concerns and other intimate details to show them behavior-based ads. They warned that such microtargeting could unfairly categorize or exclude certain people.
In 2010, for instance, the Federal Trade Commission proposed a new option for consumers, called Do Not Track. Two years later, the Obama administration introduced a blueprint for a Consumer Privacy Bill of Rights, intended to give Americans more control over what personal details companies collected from them and how the data was used.
But the Do Not Track effort and the privacy bill were both stymied.
Industry groups successfully argued that collecting personal details posed no harm to consumers and that efforts to hinder data collection would chill innovation. Instead, the advertising industry created a program to allow consumers to opt out of having their data used for customized ads, although it does not allow people to entirely opt out of having their data collected.
“If it can be shown that the current situation is actually a market failure and not an individual-company failure, then there’s a case to be made for federal regulation” under certain circumstances, said Randall Rothenberg, chief executive of the Interactive Advertising Bureau, a trade group.
The business practices of Facebook and Google were reinforced by the fact that no privacy flap lasted longer than a news cycle or two. Nor did people flee for other services. That convinced the companies that digital privacy was a dead issue.
If the current furor dies down without meaningful change, critics worry that the problems might become even more entrenched. When the tech industry follows its natural impulses, it becomes even less transparent.
There’s another reason Silicon Valley tends to be reluctant to share information about what it is doing. It believes so deeply in itself that it does not even think there is a need for discussion. The technology world’s remedy for any problem is always more technology.
“If Facebook and Google were merely interested in maximizing profits, we could regulate them,” said Maciej Ceglowski, who runs Tech Solidarity, a labor-advocacy group. “But well-intentioned people can break things not easy to fix. It’s like a child running a bulldozer. They don’t have any sense of the damage they can do.”