Crime is on the rise in many US cities and states, and law enforcement in some of those places is using that fact to push back against legislation aiming to limit biometric surveillance in the form of facial recognition.
So far, the police have been successful in turning the tide: Virginia is already poised to drop a law banning facial scanning less than a year after it was adopted. In all, about 25 states now have this type of rule in place, but reports suggest that the future of these bills is far from certain.
According to Reuters, the Virginia scenario is likely to play out in New Orleans and California as well. Meanwhile, states such as New York, Colorado, and Indiana, which are planning to ban the tech, are coming under pressure to abandon their respective bills.
The police and those lobbying on their behalf insist that they need facial recognition technology, even though critics regard it as impermissibly privacy-invasive, inaccurate, biased, and a potentially dangerous tool in the hands of a surveillance state.
Every time law enforcement at any level makes its case for adopting another controversial technology, whether to fight terrorism or “regular” crime, the same question arises: how did police ever manage to do their job in the past, when these tools were unavailable?
But some see the motivation as more sinister than a simple desire to make police work easier. ACLU attorney Jennifer Jones, for one, thinks that the police are using crime statistics and “exploiting people’s fears” to make themselves more powerful, The Register reported.
It is not only controversial new technologies: questionable policies that would otherwise meet far more opposition are also traditionally “slipped in” while the population is distracted by bigger problems. Jones makes that point as well. “This has been for decades, we see new technologies being pushed in moments of crisis,” the attorney said.
Meanwhile, effort and money are being poured around the world into moving the needle on the current state of “Artificial Intelligence,” which at this point mostly just means “Machine Learning.”
Where privacy and data security are concerned, though, things are likely to get worse rather than better as these projects make progress.