Facial recognition technology has risen quickly into the public consciousness to become a top digital-privacy concern. Microsoft President Brad Smith addressed the need to manage and regulate the technology in a blog post Friday.

Microsoft wants Congress to regulate facial recognition technology as concerns grow that it could be used to invade privacy and improperly monitor people.

Some civil-liberties groups and employees have called on tech companies to restrict the use of facial recognition, but Microsoft President Brad Smith wrote in a lengthy blog post Friday that the “only way” to manage and regulate the controversial technology is for government to do it.

Employees at Microsoft, Google and Amazon pressured each of their companies in recent weeks to cut ties with some government agencies that they believe are violating civil liberties.

Some Microsoft workers questioned whether the company was providing its facial recognition technology, Face API, to Immigration and Customs Enforcement (ICE) amid public outcry about the agency separating children from their migrant parents at the border. That’s not happening, Smith wrote in his blog, echoing the company’s previous statement that its contract with ICE involves supporting email and calendar systems.

Facial recognition technology – enabled by ubiquitous cameras and increasingly accurate image analysis software – has risen quickly into the public consciousness to become a top digital-privacy concern. The technology has spurred worries that it could be used by governments to widely monitor people without their knowledge or consent.

Many people, including some employees, have called on the companies developing the technology to restrict its use.

Smith wrote in his blog post that “While we appreciate that some people today are calling for tech companies to make these decisions – and we recognize a clear need for our own exercise of responsibility, as discussed further below – we believe this is an inadequate substitute for decision making by the public and its representatives in a democratic republic.”

Tech companies have long been inventing new systems and services faster than the government can anticipate and regulate them. That often means services are fairly mature and widely adopted before the government steps in. Even then, tech companies often resist attempts to regulate their products and services.

Some believe the industry needs to accept more responsibility for its inventions. Atti Riazi, chief information technology officer at the United Nations, told Bloomberg this spring that tech companies had a duty to consider unintended consequences. “You can’t just create and innovate without thinking,” she said.

Smith’s post addressed some of the steps the company is taking to ensure that facial recognition is accurate and ethical, including “going more slowly” as it rolls out the technology.

“’Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade,” Smith wrote. “But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

Microsoft has turned down some customer requests for the service where it found there could be “greater human rights risks,” Smith said.

Microsoft also has established both an internal and external ethics panel for artificial intelligence, and published a short book with guidelines and goals for AI earlier this year.

Facial recognition technology has been developed by multiple companies and is used in everyday apps, such as photo organizers on phones, to categorize pictures by people’s faces or to suggest who to tag in a Facebook post. It is also starting to be used by law-enforcement agencies to catch criminals and find missing people.

That law-enforcement use came under fire in late May when the American Civil Liberties Union (ACLU) criticized Amazon for selling its technology, Rekognition, to government entities, saying it could violate people’s civil liberties and be easily abused.

“People should be free to walk down the street without being watched by the government,” the ACLU wrote in a letter to Amazon CEO Jeff Bezos.

Surveillance worries are not unfounded. In China, facial recognition technology is widely used to identify people who have committed crimes. It is also used to display the faces of jaywalkers on big outdoor screens to embarrass them.

Barry Friedman, a professor at New York University School of Law, said the use of facial recognition is exactly the type of tool that the government should regulate because of the nature of the “controversial and complicated” technology. Friedman serves as director of the school’s Policing Project, which encourages police departments to be transparent and work with communities to make policies.

“This is one of those situations where if we don’t get a handle on it, it’s going to be hard to get the toothpaste back in the tube,” he said.

Complicating the concerns, the technology is far from perfect and can misidentify people. The mistakes are especially stark when the technology is identifying people of color. Microsoft acknowledged this bias last month, and said it had made improvements to its system but still had a ways to go before the technology could identify women with darker skin as accurately as it does men with lighter skin.

Employees of tech companies have been some of the most vocal when it comes to political and human-rights issues, such as rights for immigrants and LGBTQ people. Now they are speaking out about the technology their colleagues help develop.

In addition to Microsoft employees asking the company to cut ties with ICE, some Amazon employees called on the company last month to stop selling Rekognition to law-enforcement groups. Google employees pressured the company to end its artificial-intelligence contract with the Pentagon, until the tech giant announced it would not renew the deal.

If Congress does take on regulation of facial recognition, it will likely need help from the tech industry, said Eileen Donahoe, an adjunct professor at Stanford University and executive director of the university’s Global Digital Policy Incubator, which studies the impact of technology on human rights.

“The consequences of this technology for social democracy, for citizens’ liberties is just gigantic,” she said. “In a democracy, governments are the representatives of the people and need to be accountable.”

But the federal government has not shown great understanding of technology as a whole, she said, and tech companies will need to be part of the conversation.