Governments and companies are exhibiting a growing trend of wanting to, simply put, ultimately own people using their products – whether it’s their biometric data collected via ever more advanced facial recognition and other mass surveillance tools, or personal data exposed and harvested on social networks, apps, and elsewhere on the internet.
All that doesn’t seem to be enough, though: now the push is to try to “read users’ minds” – starting with figuring out the state of their emotions. There’s “invasive” – and then there’s “invasive” – and one of the companies reportedly planning to implement a high level of invasiveness is Zoom, the pandemic’s videoconferencing success story.
Reacting to all this is a group of more than two dozen organizations from around the world, which have now addressed a letter to Zoom founder and CEO Eric Yuan expressing their concern over the plans to introduce “emotion tracking software.”
The civil organizations – including Access Now, PEN America, ACLU, Surveillance Technology Oversight Project, and Defending Rights & Dissent, among others – not only believe that artificial intelligence (AI) at this stage of its development is not actually capable of analyzing human emotions, but also that the very idea of using this technology to that end is a violation of privacy and human rights.
The letter, perhaps unwittingly, undermines the Big Media-pushed narrative of the current state of AI by denouncing as pseudoscience the claim that “emotion analysis” is actually a thing. Many avid observers of AI development – and deployment, including the narratives promoting today’s machine learning-centric tiny subset of AI as “the real thing” – would likely agree.
True to the nature and purpose of the groups that signed it, however, the letter doesn’t go so much into the tech itself as into stressing that it might be “discriminatory,” including along racial lines.
The letter is light on the technical shortcomings of the version of AI currently available, and on how its use may negatively impact everybody simply because it doesn’t work – and heavy on how it affects “certain ethnicities and people with disabilities, hardcoding stereotypes.”
Zoom is not commenting on the letter for now, and nobody’s (yet) asking if this may also be a case of false advertising.
Namely, Zoom reportedly plans to offer “post-meeting sentiment analysis for hosts.”