
UK Police facial recognition technology only works 4% of the time

Police even scored a 100% misidentification rate in two separate instances.


A watchdog observing UK Metropolitan Police trials of facial recognition software has pointed out several flaws in the technology that lead to continuous misidentifications and false flags.

During a series of trials in London between 2016 and 2018, the AI produced false positives that identified people as felons as they passed through areas covered by facial recognition cameras, Big Brother Watch stated in a press release.
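For readers unfamiliar with how figures like the 4% accuracy rate in the headline are derived, they are simply the share of correct alerts among everything the system flagged. A minimal sketch of that arithmetic, using made-up counts rather than Big Brother Watch's actual trial data:

```python
# Hypothetical illustration only: how an accuracy / misidentification rate
# is typically computed from trial results. The counts below are invented.

true_matches = 4       # alerts that actually identified the right person
false_matches = 96     # alerts that flagged an innocent passer-by
total_alerts = true_matches + false_matches

accuracy = true_matches / total_alerts                 # share of alerts that were correct
misidentification_rate = false_matches / total_alerts  # share that were wrong

print(f"Accuracy: {accuracy:.0%}")                     # -> Accuracy: 4%
print(f"Misidentification rate: {misidentification_rate:.0%}")  # -> 96%
```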

The watchdog organization also argued that the technology “breaches fundamental human rights protecting privacy and freedom of expression” and demanded that the police refrain from using it.

Director Silkie Carlo pointed out that facial recognition marks a turning point for freedom in the UK. If used by the authorities, people could be tracked all across Britain’s huge CCTV network. “For a nation that opposed ID cards and rejected the national DNA database, the notion of live facial recognition turning citizens into walking ID cards is chilling,” said Carlo.

To make things worse, Big Brother Watch noted that the police scored a 100% misidentification rate in two separate instances at the Westfield shopping center in Stratford.

Presently, facial recognition technology is being tested in UK supermarkets to verify the age of customers buying alcoholic beverages and tobacco at special self-checkout machines. The machines are supplied by the US company NCR, which has announced the integration of Yoti’s facial recognition technology into its “FastLane” tills at many supermarkets.

Furthermore, several retail stores are experimenting with software called FaceFirst to build a database of shoplifters, Activist Post reported. The software is designed to scan faces from up to 100 feet away, take several pictures of the subject with the store’s cameras, and keep the highest-quality one. It then compares that photo against a database of “bad customers” compiled by the shop owner and sends an alert if there is a match. A rough sketch of that kind of matching loop follows below.
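The sketch below shows the general shape of such a workflow, using the open-source face_recognition library as a stand-in for FaceFirst’s proprietary system; the file names, the “keep the first usable frame” shortcut, and the 0.6 distance threshold are assumptions for illustration only, not details of the actual product.

```python
# Sketch of a watchlist-matching workflow like the one described above.
# Not FaceFirst's code; all file names and thresholds are hypothetical.
import face_recognition

# "Bad customer" database compiled by the shop owner (hypothetical images).
watchlist = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["shoplifter_01.jpg", "shoplifter_02.jpg"]
]

def best_encoding(frame_paths):
    """Take several captures of the subject and keep one usable face.
    A real system would score sharpness, pose, and lighting to pick the
    highest-quality shot; here we simply keep the first frame with a face."""
    for path in frame_paths:
        encodings = face_recognition.face_encodings(
            face_recognition.load_image_file(path))
        if encodings:
            return encodings[0]
    return None

def check_shopper(frame_paths):
    """Compare the captured face against the watchlist and alert on a match."""
    probe = best_encoding(frame_paths)
    if probe is None:
        return
    distances = face_recognition.face_distance(watchlist, probe)
    if distances.min() <= 0.6:  # smaller distance = closer match (assumed cutoff)
        print(f"ALERT: possible match with watchlist entry {distances.argmin()}")

check_shopper(["camera_frame_1.jpg", "camera_frame_2.jpg"])
```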

Many privacy advocacy groups, attorneys, and even Microsoft, which develops its own facial recognition software, have pointed to the issues of consent, racial profiling, and the potential for images from facial recognition cameras to be used as evidence of misconduct by law enforcement.

As Jay Stanley, an attorney with the ACLU, summarized to BuzzFeed News: “We don’t want to live in a world where government bureaucrats can enter in your name into a database and get a record of where you’ve been and what your financial, political, sexual, and medical associations and activities are, and we don’t want a world in which people are being stopped and hassled by authorities because they bear resemblance to some scary character.”

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.

