Amid rising concerns over a string of violent attacks potentially linked to a single perpetrator, the Austin Police Department has been authorized to begin using facial recognition technology.
While the move is being portrayed as a necessary investigative aid, it marks yet another step toward embedding controversial biometric surveillance into routine law enforcement practices, often without sufficient public oversight.
APD officials claim the rollout will be narrow in scope: only the Robbery Unit will have access, and the software will be used solely to examine video and digital evidence tied to specific crime scenes.
They’ve stated that images of bystanders and witnesses will not be uploaded into the system. But even with these restrictions in place, critics of facial recognition warn that such safeguards are often inadequate and difficult to enforce in practice.
The policy stems from a 2020 city resolution that allows facial recognition use only when an imminent threat is involved. It also requires notifying both the public and city council once the technology has been deployed.
Still, these procedural measures do little to address broader concerns about the growing normalization of biometric surveillance in public spaces. Under current rules, data gathered through facial recognition in first-degree felony cases can be stored for up to ten years, a retention window that raises questions about necessity and proportionality.
Despite APD’s assurances of a focused approach, once these tools are integrated into police operations, they rarely remain limited. What begins as an exception frequently becomes routine, especially in the absence of strict external accountability.
Austin’s adoption of facial recognition mirrors a broader, uneven pattern across the US. While some agencies, such as the Los Angeles Police Department, which imposed limits in 2020, have backed away from the technology, others have embraced it without fully grappling with its risks.
The El Cerrito Police Department, for example, has turned to Clearview AI, a company that has drawn international scrutiny for scraping billions of images from the internet without consent.