LinkedIn, the social network best known for job-hunters and recruiters, is grappling with fake accounts, violent content and even child exploitation.

It’s pulling back the curtain for the first time on how it removes content that breaks its rules. Its transparency report, provided first to The Washington Post, makes clear the popular professional site is dealing with many of the same problems plaguing other social media companies.

LinkedIn took down more than 21 million fake accounts in the first half of the year, and it removed more than 60 million pieces of spam, including fake job postings. It also took down more than 16,000 instances of harassment, 11,000 posts containing obscene or pornographic content, nearly 2,000 posts showing violence or terrorism and 22 occurrences of child exploitation.

“Unfortunately, some people will use technology in ways that it was never intended,” LinkedIn general counsel Blake Lawit told me in advance of the report’s release this morning. “So for us, we need to be vigilant and police it and take care of it, which is what we do.”

LinkedIn’s announcement shows how technology companies are heeding Washington’s calls for increased transparency about their decisions on content moderation in the wake of foreign interference that upended the 2016 election and terror attacks that originated online. Facebook began publicly reporting similar metrics last year, and this fall, it began reporting them for Instagram as well.

LinkedIn reports only a fraction as many takedowns as larger social networks such as Facebook, but the fact that such content appears on the service at all highlights the omnipresence of harmful material online.

“Any is too much,” Lawit said. “Part of being responsible, being accountable is providing transparency.”

It can be difficult to compare how companies stack up against one another in their efforts to combat violence, harassment and other harmful content, since the reports’ methodology is inconsistent from company to company. For instance, Facebook includes some categories that LinkedIn doesn’t, such as removals of drug or firearm sales, or instances of self-harm.

Twitter and Google do not report the same granular data as Facebook about the content they decide to pull down. Instead, they report some broader categories: Twitter discloses instances of election interference and government requests for content removal, while Google reports government requests to delete content and instances where it removes information under European privacy law.

Facebook has criticized its tech peers for not being as transparent about these efforts, without directly naming rivals Twitter and Google.

“As a society we don’t know how much of this harmful content is out there and which companies are making progress,” Facebook CEO Mark Zuckerberg told reporters in a call about Facebook’s most recent content moderation report.

It can be risky for companies to disclose how much content they’re taking down because it can make it seem like their services host more harmful content than others. But Zuckerberg pushed back on that notion in the same call. “What it says is we’re working harder to identify this and take action on it,” he said.

LinkedIn was not under the same public pressure as Facebook on its content moderation efforts, but given the broader debate over the industry, it began seriously considering going public with its numbers last year, Lawit said.

“We’re at a point now where we recognize that we have a responsibility,” he said. “Part of that is to provide more transparency and I think that’s what led to the discussion and the action.”