Until appropriate legal protections are adopted, communities should issue a moratorium on most uses of face recognition.

This article first appeared in The Los Angeles Times.

When is it appropriate for police to conduct a face recognition search? To figure out who’s who in a crowd of protesters? To monitor foot traffic in a high-crime neighborhood? To confirm the identity of a suspect — or a witness — caught on tape?

According to a new report by Georgetown Law’s Center on Privacy & Technology, these are questions very few police departments asked before widely deploying face recognition systems. And this “use first, worry about the consequences later” approach is undermining Americans’ rights to privacy, free speech and assembly.

Consider some known uses of face recognition technology. In April, the Baltimore Police Department used it to locate, identify and arrest certain people protesting Freddie Gray’s death in police custody. Three years ago, the Los Angeles Police Department deployed to undisclosed locations 16 wireless video cameras that can conduct real-time face recognition.

Today police can use almost any facial photograph — one snapped during a police stop, or copied from social media — and comb through massive databases of photos for an identification. In 26 states, police can submit searches against databases containing all driver’s license photos from that state. In several other jurisdictions, police already have, or are trying to obtain, video systems that scan faces and check them in real time for matches against a set of photos.

We don’t know exactly how widespread these systems already are, but their use is certainly more extensive than most people think. According to the Georgetown Law report, at least one in four U.S. law enforcement agencies has the ability to run face recognition searches. Photos of nearly half of all American adults are already included in law enforcement face recognition networks.

In San Diego County, for example, more than 800 officers from 28 local law enforcement agencies run an average of 560 face recognition searches each month. In Los Angeles County, face recognition systems are accessible to all local law enforcement agencies, including school police. In Florida, Maryland, Pennsylvania, Ohio and elsewhere — 26 states in total — police or the FBI can submit face recognition searches against that state’s driver’s license photos.

Face recognition technology fundamentally changes how law enforcement interacts with the public. It allows police to surreptitiously identify you from a distance without requiring consent or even the suspicion of wrongdoing. It lets police document not just what happens at a protest or a rally, but who is there. It can facilitate police tracking of your whereabouts in real time. Police generally cannot track your location without a court order — yet in many jurisdictions, there are no restrictions on police accomplishing these same ends using remote cameras and face recognition.

Not surprisingly, most police departments implemented this technology with little or no public discussion and few safeguards. Of the more than 50 law enforcement agencies the Georgetown Law report surveyed about their use of face recognition, just four had issued public policies governing the technology. Of those, only San Diego had subjected its policy to review and approval by elected legislators.

This lack of transparency and avoidance of democratic oversight by police departments are already taking a toll. Now, instead of having a public debate on how face recognition technology can be used without infringing on civil liberties, we must consider how to rein in what are, in many cases, out-of-control surveillance systems.

One particular cause for alarm is evidence that this technology disproportionately affects people of color. A prominent 2012 study co-written by an FBI expert found that several leading face recognition algorithms were up to 10% less accurate on photos of African Americans. Combined with the overrepresentation of people of color in face recognition databases, that flaw could easily exacerbate the biases in policing that have given rise to protests around the country.

It’s time to press the pause button on the uncontrolled use of this technology by police so that state and local lawmakers can weigh whether face recognition technology is a net public good.

If lawmakers do choose to allow such searches, they should pass comprehensive laws on face recognition. At a minimum, such laws should limit the technology’s use to protect all people’s constitutional rights. They should require individualized suspicion that someone has committed a crime. There should also be regular audits to guard against misuse, and public disclosure of the frequency and types of searches run.

Without these protections, face recognition technology imperils our rights to privacy, free speech and assembly — and those risks may not be borne equally. At a time when people are taking to the streets to voice opposition to policing practices, uncontrolled use of face recognition technology adds fuel to the fire.