In 2013 Edward Snowden made explosive revelations about the scope and depth of internet surveillance carried out by the United States and its allies, confirming what had until then been dismissed as rather wild conspiracy theories.
His decision to publish a large number of confidential documents, with the help of several global media outlets, was a dramatic attempt to raise public awareness – a task that may or may not have been accomplished. It was also a move that polarized opinion in his own country, and one that has cost him dearly: he has become one of the figures that Breitbart refers to as “western dissidents.”
Snowden now lives in exile in Russia and has lately become a more visible presence in the media as he promotes his memoir. And when this whistleblower shares his opinion on some of the most controversial issues in the tech industry, people tend to listen.
In an interview with the Guardian, Snowden discussed the dangers that future misuses of artificial intelligence (AI) could pose to civil liberties and human rights.
Specifically, he pointed to AI-powered facial recognition and similar technologies. As these capabilities grow, he argued, security cameras could take the place of human police officers, introducing a “robot” element into some very complex and sensitive activities, such as law enforcement. “An AI-equipped surveillance camera would be not a mere recording device, but could be made into something closer to an automated police officer,” Snowden said.
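To make the scenario concrete, here is a minimal sketch in Python of the kind of automated watchlist matching Snowden describes, built on the open-source face_recognition library. The image files and the watchlist itself are hypothetical placeholders, not anything described in the interview.

```python
# A hypothetical sketch of automated watchlist matching. The file names
# and the watchlist are illustrative assumptions, not real data.
import face_recognition

# Build a watchlist of known face encodings from (hypothetical) photos,
# assuming each photo contains exactly one face.
watchlist_images = ["suspect_a.jpg", "suspect_b.jpg"]
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in watchlist_images
]

# A single frame captured from a surveillance camera (hypothetical file).
frame = face_recognition.load_image_file("camera_frame.jpg")

# Detect every face in the frame and compare each one to the watchlist.
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(watchlist_encodings, encoding)
    if any(matches):
        # This is the point where a camera stops being a "mere recording
        # device": a real deployment could log, track, or alert here.
        print("Watchlist match flagged in camera frame")
```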
China is known not only for advancing this type of technology but also for showing little hesitation in deploying it. This was brought to light once again during the ongoing Hong Kong protests, as demonstrators sought ways to shield their identities from the authorities.
But, as the report notes, even western democracies use similar methods to police their cities. And regardless of where it is deployed, one of the problems with the current state of AI becomes evident once again: the technology simply isn’t accurate enough yet.
This means that mistakes keep piling up, whether in Facebook’s “content moderation” algorithms or in the facial recognition systems used by the London police in the hope of catching criminals.
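Part of the problem is simple arithmetic. When almost nobody in the scanned crowd is actually on a watchlist, even a system that is right 99% of the time floods its operators with false alarms. The sketch below works through that base-rate effect with purely illustrative numbers, not figures reported for any real deployment.

```python
# Base-rate arithmetic with assumed, purely illustrative figures.
scanned_faces = 1_000_000     # faces scanned by the cameras (assumption)
actual_matches = 100          # watchlisted people among them (assumption)
true_positive_rate = 0.99     # chance a watchlisted face is flagged (assumption)
false_positive_rate = 0.01    # chance an innocent face is flagged (assumption)

true_alerts = actual_matches * true_positive_rate
false_alerts = (scanned_faces - actual_matches) * false_positive_rate

# Fraction of alerts that actually point at a watchlisted person.
precision = true_alerts / (true_alerts + false_alerts)
print(f"true alerts:  {true_alerts:.0f}")    # 99
print(f"false alerts: {false_alerts:.0f}")   # 9999
print(f"precision:    {precision:.1%}")      # about 1.0%
```

Under those assumptions, roughly ninety-nine out of every hundred alerts point at an innocent person – exactly the kind of error rate that piles up.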