Spanish police will soon be allowed to use an automatic facial recognition tool, dubbed ABIS (automatic biometric identification system). ABIS uses artificial intelligence to identify suspects from a database that is currently under development.
ABIS uses an algorithm called Cogent, developed by the French military technology company Thales. The program compares images submitted by the police, such as a still from a security camera, against a database that will contain over five million images of suspects and detainees already on file. Those arrested after the system launches will be added to the database.
The database will not draw on civil records, such as the database of photos used for national identity documents.
The Ministry of the Interior said it has been working on the system for three years. The ministry said that ABIS will be used to investigate serious crimes and insists it will not be used for surveillance.
In Spain, police have had two ways of identifying a perpetrator when there is no named suspect: fingerprint analysis and DNA analysis. Now they will have a third option, Morning Express reported. Without automatic facial recognition, police cannot begin searching footage for a suspect unless they already have something to narrow the search.
The database containing facial images will be the same one that contains DNA and fingerprint samples. The data is shared with other EU member states under the Schengen Information System (SIS).
“The Spanish ABIS system can connect with European databases, such as Eurodac, eu-LISA or VIS, since the corresponding links have been designed. It is not an isolated system, but rather it is interconnected with the countries of the European Union,” sources at Thales explained.
EL PAÍS reported that the Spanish Agency for Data Protection (AEPD) has contacted the Ministry of the Interior “to address various projects of the Ministry that could have an impact on data protection.” The data protection agency was not aware of ABIS until July. It wants to determine the risks the system poses to the rights and freedoms of citizens. It also wants to know how long the police will keep the images of suspects, who should have access to the data, and so on.
Perhaps the bigger problem is that algorithms make mistakes, and in this case the stakes are higher than a wrong music recommendation. In the US, Robert Williams was wrongly arrested and jailed after a facial recognition system confused him with someone else. Facial recognition technology is markedly less accurate at identifying people of color.
Brussels has categorized facial recognition technology as “high risk” and is drafting regulation to address the potential risks artificial intelligence systems carry. However, it has green-lit the use of facial recognition for “the purposes of preventing, arresting, or investigating serious crimes or terrorism.” The technology may not be used for surveillance.
Cogent has been evaluated by NIST, the US National Institute of Standards and Technology. But some experts do not consider that sufficient.
“NIST does not say that algorithms are good or bad. And in addition, the organization carries out several evaluations with different objectives, and we do not know which ones they are referring to,” says Carmela Troncoso, professor at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.
Eticas Consulting, a company that specializes in auditing algorithms, shares this concern: “In accordance with European regulations, the proportionality of high-risk technologies must be justified and what is expected to be achieved with them must be established. It is also necessary to know what precautions have been taken to avoid algorithmic biases: it is proven that these systems identify white people better than the rest, so it must be shown that they do not make mistakes with Black people.”