Cory Doctorow’s sunglasses are seemingly ordinary. But they are far from it when seen on security footage, where his face is transformed into a glowing white orb.
At his local credit union, bemused tellers spot the curious sight on nearby monitors and sometimes ask, “What’s going on with your head?” said Doctorow, chuckling.
The frames of his sunglasses, from Chicago-based eyewear line Reflectacles, are made of a material that reflects the infrared light emitted by many surveillance cameras. The glasses represent a fringe movement of privacy advocates experimenting with clothes, ornate makeup and accessories as a defense against some surveillance technologies.
Some wearers are propelled by the desire to opt out of what has been called “surveillance capitalism” — an economy that churns human experiences into data for profit — while others fear government invasion of privacy.
To be sure, “people have long been interested in technology that can make people invisible,” often in the form of masks, said Dave Maass, senior investigative researcher at San Francisco-based nonprofit Electronic Frontier Foundation. In response to the resurgence of the Ku Klux Klan between the 1920s and the 1950s, numerous states enacted anti-mask laws to prohibit groups of people from concealing their identities.
And Maass noticed an uptick in digital-surveillance countermeasures after former National Security Agency contractor Edward Snowden’s 2013 revelations about American surveillance programs around the world.
Today, artificial intelligence (AI) technology such as facial recognition has become more widespread in public and private spaces, including schools, retail stores, airports and concert venues; it is even used to unlock the newest iPhones. Civil-liberty groups concerned about the potential for misuse have urged politicians to regulate the systems. A recent Washington Post investigation, for instance, revealed FBI and Immigration and Customs Enforcement agents used facial recognition to scan millions of Americans’ driver’s licenses without their knowledge to identify suspects and undocumented immigrants.
Researchers have long criticized the lack of oversight around AI, given its potential for bias. A recent National Institute of Standards and Technology study of facial-recognition algorithms, including ones from Microsoft and Intel, showed Asian and Black people are up to 100 times more likely to be misidentified than white people. In one-to-one matching, where two different photos of a person are compared to confirm identity, such as when checking passports, the study found Native Americans had the highest misidentification rates of any U.S. demographic. Images of Black women are more likely to be falsely matched with photos of others in an FBI database.
That study echoed past research showing Amazon’s facial-analysis system has higher error rates when identifying images of darker-skinned women than of lighter-skinned men.
Daniel Castro, the vice president of nonprofit think tank Information Technology and Innovation Foundation, believes the error rates could be reduced by comparing images against broader, more diverse databases.
Facial-recognition systems have proved effective in pursuing criminal-investigation leads, he said, and are more accurate than humans at verifying people’s identities at border crossings. Developing policies and practices around data retention and usage could prevent government misuse, he added.
“The general use of this technology in the United States is very reasonable,” said Castro. “They’re being undertaken by police agencies that are trying to balance communities’ public safety interests with individual privacy.”
Still, in Doctorow’s eyes, the glasses serve as a conversation starter about the perils of granting governments and companies unbridled access to our personal data.
For Doctorow, an L.A.-based science-fiction author and privacy advocate, seeking out antidotes to overreaching power carries political and symbolic significance. His father’s family fled the Soviet Union, which used surveillance to control the masses.
“We are entirely too sanguine about the idea that surveillance technologies will be built by people we agree with for goals we are happy to support,” he said. “For this technology to be developed and for there to be no countermeasures is a road map to tyranny.”
Recent iterations of Reflectacles prevent some forms of 3D facial-recognition technology from finding matches within a database; special lenses block the infrared light used to map people’s faces, said the glasses’ designer, Scott Urban.
Under infrared light, the lenses of normal sunglasses appear clear, but the wavelength absorbers baked into Urban’s lenses soak up the light, turning them black on camera.
Reflectacles’ absorbent quality makes them effective at blocking Face ID on the newest iPhones. While Urban said the glasses aren’t designed to evade facial recognition that doesn’t use infrared light, they will lessen the chance of a positive match in such systems.
A longtime privacy advocate, Urban has avoided the adoption of smart technologies that could store his personal information. “Me and my grandmother are the only two people left that don’t have a smartphone,” Urban said over his flip phone.
He believes there’s an appetite for discreet gear that maintains people’s anonymity, as shown by his Kickstarter campaign, backed by 311 people who pledged $41,315 after it launched in July.
Some of his customers have turned to Reflectacles as a safety measure. As conflict increased between pro-democracy protesters and police in Hong Kong over the summer, Urban said, orders spiked from activists in the area wanting to protect their identity.
Other forms of anti-surveillance camouflage include elaborate face paint that foils computer vision, such as the patterns designed by artist Adam Harvey. The characteristic black and white face paint worn by Juggalos, the die-hard fans of hip-hop duo Insane Clown Posse, can also block some facial-recognition systems.
Designer Leo Selvaggio sacrificed his own identity for the sake of obscuring others’ by creating a mask from a 3D scan of his face.
Hyper-realistic masks have been used for criminal purposes, such as by bank robbers seeking to conceal their identities and foil police investigations. A recent Cognitive Research: Principles and Implications study found that participants believed hyper-realistic masks were real faces 20 percent of the time.
Some gear demonstrates the faultiness of AI systems relied upon in investigations. L.A.-based cybersecurity analyst Kate Rose created her own fashion line, Adversarial Fashion, to confuse automatic license-plate readers.
A clothes maker on the side, she imprinted stock images of out-of-use and fake license plates onto fabric to create shirts and dresses. When wearers walk past the readers at traffic stops, the machines read the images on the clothes as real plates, feeding junk data into the technology.
Her line, Rose said, serves as a playful demonstration that “this technology is based on something that is quite easy to mess with.”
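Neither Rose’s fabrics nor any vendor’s reader code is published here, but a minimal, hypothetical sketch can illustrate the weakness she exploits: once a plate reader’s character recognition emits text, a typical logging stage records anything shaped like a plate, with no way to tell a bumper from a shirt. The plate format, function names and data below are all invented for illustration.

```python
import re
from datetime import datetime, timezone

# Hypothetical plate format for illustration only; real readers
# handle many state formats.
PLATE_PATTERN = re.compile(r"^[A-Z]{3}[0-9]{4}$")

def log_reads(ocr_hits, database):
    """Record every OCR hit that looks like a plate.

    The reader has no way to tell a plate on a bumper from one
    printed on a shirt, so both end up in the database.
    """
    for text in ocr_hits:
        if PLATE_PATTERN.match(text):
            database.append({
                "plate": text,
                "seen_at": datetime.now(timezone.utc).isoformat(),
            })

db = []
# One read from an actual car, two from fake plates printed on fabric.
log_reads(["ABC1234", "QRS7777", "JNK0000"], db)
print(db)  # all three are logged; the junk reads look identical to the real one
```

Anything downstream that trusts those logs, such as alerts or location histories, inherits the junk, which is Rose’s point.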
Seattle has had its own issues with surveillance technology, which led to the 2017 Seattle Surveillance Ordinance. The Seattle Police Department’s (SPD) use of automatic license-plate readers has drawn concern. In an impact assessment sent to the Seattle City Council last spring, an external working group criticized SPD’s ability to scan more than 13.5 million license plates a year belonging to drivers not specifically suspected of crimes.
Despite the rise of the fashion trend, more people buy surveillance technology such as Amazon’s Ring cameras to protect their property than buy anti-surveillance gear, said Castro: “The reason is because people have a sense of security in using this technology. They want to know if a crime is committed that they have some evidence, recourse and safety around their own properties.”
San Francisco, Berkeley and Oakland, California, as well as Somerville, Massachusetts, banned the use of facial-recognition technology by government agencies last year. This past fall, California outlawed the pairing of facial recognition and biometric scanning with police-worn body cameras for the next three years. Portland’s City Council is considering going a step further by prohibiting private businesses and governmental agencies from using the technology. At the congressional level, proposed legislation such as the Algorithmic Accountability Act and a resolution on creating guidelines around the ethical development of AI have signaled growing support for federal regulation.
More broadly, the technology is prompting public pushback. Last September, digital-rights group Fight for the Future launched a campaign to discourage festivals and venues from using facial-recognition technology to scan concertgoers. Burning Man, Coachella, Bumbershoot and Lollapalooza have vowed not to use the technology, according to the nonprofit.
While Fight for the Future deputy director Evan Greer salutes creative ways to avoid surveillance, she believes the onus should be on elected officials to regulate AI systems.
“People shouldn’t have to wear special glasses, jewelry or face masks when they leave their houses in order to stay safe or protect their basic civil liberties,” said Greer, adding that those who can’t afford the devices will be left vulnerable. “Members of the public need to fight to keep this technology out of our schools, airports and public places. We can’t give up now and literally throw a bag over our heads.”