In the wake of the Uvalde mass shooting, reports are concluding that the digital surveillance technology currently in use in schools is insufficiently effective.
Keeping children safe is one justification given by schools that use services monitoring students’ social media activity for posts or behavior indicating that violence may be afoot. Justification is needed because the way these services operate can threaten privacy and free speech.
It now turns out that the Uvalde school where the tragedy occurred had, in the 2019/2020 school year, bought a subscription to one such surveillance service, Social Sentinel, which monitors students’ online conversations.
Reports currently cannot confirm that the tool was still in use at the school at the time of the shooting; if it was, it clearly failed miserably.
The entire concept seems to come down to the old adage about the futility of giving up liberty to purchase temporary safety – or the appearance of it. Social Sentinel is now owned by Navigate360, which has so far declined to comment on the situation.
Some media reports suggest that the problem is that Social Sentinel can only monitor public posts, while the Uvalde shooter’s messages indicating violent intent were allegedly confined to private conversations.
Others, like representatives of Human Rights Watch who spoke to The Verge, focused on the danger of harming children by deploying “unproven and untested” surveillance technologies.
Before the Uvalde massacre, many reports and nonprofits had been revealing not only that the use of surveillance tools targeting students’ social media presence was “exploding” in schools across the US – but also that the technology was unreliable to the point where the word “exploding,” used the way it was just used here, could easily be flagged as “a threat.”
Posts in which students referenced a movie called “Shooter” or described their credit score as “shooting up” were incorrectly flagged – while actual threats slipped through the cracks, in what is likely yet another example of how unreliable algorithms, including those based on machine learning, are as tools of surveillance and/or censorship.
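To make that failure mode concrete, here is a minimal, hypothetical Python sketch of context-free keyword flagging (not Social Sentinel’s actual code, which is not public), showing how benign phrases like the ones above get flagged while a post worded without the listed keywords slips through:

# Hypothetical illustration only; real monitoring services do not publish their methods.
FLAGGED_KEYWORDS = {"shoot", "shooting", "shooter", "exploding"}

def naive_flag(post: str) -> bool:
    """Flag a post if any word matches a keyword, ignoring context entirely."""
    words = {word.strip('.,!?"').lower() for word in post.split()}
    return bool(words & FLAGGED_KEYWORDS)

posts = [
    "Just watched the movie Shooter, great film",      # flagged: false positive
    "My credit score is finally shooting up",           # flagged: false positive
    "Use of this technology is exploding in schools",   # flagged: false positive
    "Planning to hurt people at school tomorrow",        # missed: no keyword hit
]

for post in posts:
    print("FLAGGED" if naive_flag(post) else "ok     ", post)

Commercial services are presumably more sophisticated than this, reportedly layering machine-learning models on top, but the reports cited above suggest they still stumble on exactly this kind of missing context.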
“Though the efficacy of services like Social Sentinel is contested, investors have backed social media monitoring companies to the tune of tens of millions of dollars, betting on the longevity of digital surveillance as a feature of the educational landscape,” writes The Verge.