The question keeps me up at night, in something like terror.
Cameras are the defining technological advance of our age. They are the keys to our smartphones, the eyes of tomorrow’s autonomous drones and the engines that drive Facebook, Instagram, TikTok, Snapchat and Pornhub. Cheap, ubiquitous, viral photography has fed social movements like Black Lives Matter, but cameras are already prompting more problems than we know what to do with – revenge porn, livestreamed terrorism, YouTube reactionaries and other photographic ills.
And cameras aren’t done. They keep getting cheaper and – in ways both amazing and alarming – they are getting smarter.
They see you

Advances in computer vision are giving machines the ability to distinguish and track faces, to make guesses about people’s behaviors and intentions, and to comprehend and navigate threats in the physical environment. In China, smart cameras sit at the foundation of an all-encompassing surveillance totalitarianism unprecedented in human history. In the West, intelligent cameras are now being sold as cheap solutions to nearly every private and public woe, from catching cheating spouses and package thieves to preventing school shootings and immigration violations. I suspect these and more uses will take off, because in my years of covering tech, I’ve gleaned one ironclad axiom about society: If you put a camera in it, it will sell.
That’s why I worry that we’re stumbling dumbly into a surveillance state. And it’s why I think the only reasonable thing to do about smart cameras now is to put a stop to them.
This week, San Francisco’s board of supervisors voted to ban the use of facial-recognition technology by the city’s police and other agencies. Oakland and Berkeley, in California, are also considering bans, as is the city of Somerville, Massachusetts. I’m hoping for a cascade. States, cities and the federal government should impose an immediate moratorium on facial recognition, especially its use by law enforcement agencies. We might still decide, at a later time, to give ourselves over to cameras everywhere. But let’s not jump into an all-seeing future without understanding the risks at hand.
What are the risks?

Two new reports by Clare Garvie, a researcher who studies facial recognition at Georgetown Law, brought the dangers home for me. In one report – written with Laura Moy, executive director of Georgetown Law’s Center on Privacy & Technology – Garvie uncovered municipal contracts indicating that law enforcement agencies in Chicago, Detroit and several other cities are moving quickly, and with little public notice, to install Chinese-style “real time” facial recognition systems.
In Detroit, the researchers discovered that the city signed a $1 million deal with DataWorks Plus, a facial recognition vendor, for software that allows for continuous screening of hundreds of private and public cameras set up around the city – in gas stations, fast-food restaurants, churches, hotels, clinics, addiction treatment centers, affordable-housing apartments and schools. Faces caught by the cameras can be searched against Michigan’s driver’s license photo database. Researchers also obtained the Detroit Police Department’s rules governing how officers can use the system. The rules are broad, allowing police to scan faces “on live or recorded video” for a wide variety of reasons, including to “investigate and/or corroborate tips and leads.” In a letter to Garvie, James E. Craig, Detroit’s police chief, disputed any “Orwellian activities,” adding that he took “great umbrage” at the suggestion that police would “violate the rights of law-abiding citizens.”
Never before

I’m less optimistic, and so is Garvie. “Face recognition gives law enforcement a unique ability that they’ve never had before,” Garvie told me. “That’s the ability to conduct biometric surveillance – the ability to see not just what is happening on the ground but who is doing it. This has never been possible before. We’ve never been able to take mass fingerprint scans of a group of people in secret. We’ve never been able to do that with DNA. Now we can with face scans.”
That ability alters how we should think about privacy in public spaces. It has chilling implications for speech and assembly protected by the First Amendment; it means that police can watch who participates in protests against the police and keep tabs on them afterward.
It’s already happening

In 2015, when protests erupted in Baltimore over the death of Freddie Gray while in police custody, the Baltimore County Police Department used facial recognition software to find people in the crowd who had outstanding warrants – arresting them immediately, in the name of public safety.
But there’s another wrinkle in the debate over facial recognition. In a second report, Garvie found that for all their alleged power, face-scanning systems are being used by police in a rushed, sloppy way that should call into question their results.
Here’s one of the many crazy stories in Garvie’s report: In the spring of 2017, a man was caught on a security camera stealing beer from a CVS store in New York. But the camera didn’t get a good shot of the man, and the city’s face-scanning system returned no match.
Police, however, were undeterred. A detective in the New York Police Department’s facial recognition unit thought the man in the pixelated CVS video looked like the actor Woody Harrelson. So the detective went to Google Images, got a picture of the actor and ran his face through the face scanner. That produced a match, and the law made its move. A man was arrested for the crime not because he looked like the guy caught on tape but because Woody Harrelson did.
Devora Kaye, a spokeswoman for the New York Police Department, told me that the department uses facial recognition merely as an investigative lead and that “further investigation is always needed to develop probable cause to arrest.” She added that “the NYPD constantly reassesses our existing procedures and in line with that are in the process of reviewing our existent facial recognition protocols.”
Etch-a-Sketch

This sort of sketchy search is routine in the face business. Face-scanning software sold to police allows for easy editing of input photos. To increase the hits they get on a photo, police are advised to replace people’s mouths, eyes and other facial features with model images pulled from Google. The software also allows for “3D modeling,” essentially using computer animation to rotate or otherwise change a face so that it can match a standard mug-shot photo.
In a bizarre twist, some police departments are even pushing the use of facial recognition on forensic sketches: They will search for real people’s faces based on artists’ renderings of an eyewitness account, a process riddled with the sort of human subjectivity that facial recognition was supposed to obviate.
The most troubling thing about all of this is that there are almost no rules governing how the technology is used. “If we were to find out that a fingerprint analyst were drawing in where he thought the missing lines of a fingerprint were, that would be grounds for a mistrial,” Garvie said.
Stop it

But people are being arrested, charged and convicted based on similar practices in face searches. And because there are no mandates about what defendants and their attorneys must be told about these searches, police are allowed to act with impunity.
None of this is to say that facial recognition should be banned forever. The technology may have some legitimate uses. But it also poses profound legal and ethical quandaries. What sort of rules should we impose on law enforcement’s use of facial recognition? What about on the use of smart cameras by our friends and neighbors, in their cars and doorbells? In short, who has the right to surveil others – and under what circumstances can you object?
It will take time and careful study to answer these questions. But we have time. There’s no need to rush into the unknown. Let’s stop using facial recognition immediately, at least until we figure out what is going on.
Farhad Manjoo is a columnist for The New York Times.