The UK police are testing an AI system whose purpose is to sift through Twitter comments to predict crime before it happens. But not just any type of crime: the focus is specifically on hate crimes stemming from religious or ideological tensions.
The Daily Telegraph reports that the system, using “hate tweets,” will be introduced to pinpoint those real-world locations where crimes linked to such sentiments might happen.
In its report, the paper described the idea as the use of “Minority Report-style algorithms,” something that has previously been attempted by China, which draws on data from its “extensive surveillance network.”
Which brings us to the question of how much data the UK police will be able to access through the AI system created by Cardiff University – because true AI requires very large data sets.
That’s something China has likely amassed from the surveillance network deployed across a nation of some 1.4 billion people – but The Telegraph’s report doesn’t go into how much data the Twitter bots will have to work with to produce useful results for the UK police. With digital civil liberties in the UK among the weakest in the Western world, that data may soon pile up.
The report explains that the system has been tested in the past and that researchers have been able to prove a connection between “hate tweets” and increased crime in the physical world, specifically, “racially and religiously aggravated crimes.”
The police will have the bots at their disposal starting October 31. And that’s no coincidence: it’s the day the UK is expected to finally leave the European Union. As the report puts it, the police will “track racist and hateful comments targeting religious and ethnic minorities across the country to measure sentiment after Brexit deadline day.”
In fact, deploying and testing this technology at this point in time could be seen as a preemptive rather than a predictive measure: if citizens know there is a system that can locate them in the real world, they will probably be less prone to inciting hatred or making threats on Twitter – and they may even hesitate simply to speak their mind.
Cardiff University’s Matthew Williams described the country’s political moment as “toxic” and said that the AI system will be used to see if Brexit will result in an increase of hate speech – and, presumably, in related physical crime.
“There has been talk of riots on the streets, and there is an expectation that tensions will bubble up around that date,” he said.