Several police departments in the U.S. are generating leads by feeding “all manner of ‘probe photos’” into facial recognition software, according to a study published on Thursday by the Georgetown Law Center on Privacy and Technology.
Images such as CGI renderings, manipulated pictures, and even celebrity photos are being submitted for comparison against police records and driver's license databases.
The study comes in the wake of San Francisco’s Tuesday vote to ban the use of facial recognition software by city government and agencies, making it the first U.S. city to pass such legislation.
According to Clare Garvie, senior associate at the Georgetown Law Center and author of the research, facial recognition systems “threaten to fundamentally change the nature of our public spaces”. Garvie added that “in the absence of rules requiring agencies to publish when, how, and why they’re using this technology, we really have no sense [of how often law enforcement is relying on it]”.
The study mentions several examples of how police are using all sorts of images in their facial recognition searches. In April 2017, the New York Police Department was struggling with a pixelated security-camera image of a suspect caught stealing beer.
The low-quality footage was returning zero matches from the recognition system, so a detective swapped the picture for one of actor Woody Harrelson – claiming the suspect resembled the movie star – in order to get hits.
In another case, the NYPD fed a photo of a New York Knicks player into a facial recognition system, this time to find a Brooklyn man wanted for assault.
Police departments are also feeding poorly drawn sketches into facial recognition searches. Washington County police, for instance, ran a forensic sketch through Rekognition, Amazon’s facial recognition software.
When poorly drawn sketches and celebrity pictures do not work, there are software tools that can generate a composite image. While some editing is understandable, computer-generating an image can “often amount to fabricating completely new identity points not present in the original photo”, the study adds.
Garvie and other experts are especially worried about facial recognition being applied to body cameras. “Facial surveillance makes mistakes,” noted Garvie. “On-body cameras that may lead to an officer drawing their weapon and using deadly force under the mistaken belief that a person is a suspect or threat to public safety because an algorithm told them it was a match. That’s too risky of an application to ever consider.”