Researchers say they can detect Twitter “disinformation” before a user even posts it

The idea of "precrime" is getting closer to becoming reality.

Given its recent track record of censorship, banning users and muting, labeling, and deranking their content, it's not hard to imagine that Twitter might be sorely tempted to implement any new technology that facilitates, or even "improves" on, this policy.

Especially if it happens to open up a whole new horizon of censorship: one not limited to punishing "thought crime" that has already occurred, but aimed at predicting and dealing with "pre-thought crime."

That is essentially what a new algorithm developed by researchers from England’s University of Sheffield aims to achieve: predict if a user will “spread disinformation before they actually do it.”

The hope behind this dystopian-sounding, machine-learning-powered method is to help social media giants like Facebook and Twitter, as well as governments, come up with their own new ways of clamping down on what's deemed to be disinformation.

The study that the algorithm builds upon is based on the authors' preconceived notion of reliable and unreliable news sources; from there, the researchers looked into Twitter users sharing posts from both categories. All in all, a million tweets from some 6,200 users were used to produce the algorithm, whose accuracy in predicting "pre-thought crime" is said to be as high as 79.9 percent.
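To make that setup concrete, here is a minimal sketch of the kind of pipeline such a study implies: label each user by whether they have shared links from sources on an "unreliable" list, train a text classifier on their tweet history, and report its accuracy. This is an illustration only, not the Sheffield researchers' actual code; the file name, column names, and choice of model are all assumptions.

```python
# Illustrative sketch only -- NOT the study's actual implementation.
# Assumes a hypothetical CSV with one row per user: the concatenated
# text of their tweets, and a 0/1 label (1 = has shared links from a
# source the researchers classed as "unreliable").
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("users.csv")  # hypothetical file: columns "tweets", "label"

X_train, X_test, y_train, y_test = train_test_split(
    df["tweets"], df["label"], test_size=0.2, random_state=42
)

# Bag-of-words (TF-IDF) features over each user's tweet history.
vectorizer = TfidfVectorizer(max_features=20_000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# A simple linear classifier; the paper's model may be entirely different.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test_vec)):.3f}")
```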

The study and the algorithm deal with natural language as processed by computers. What betrayed users as sharing from "unreliable" sources was their frequent use of words such as "liberal," "government," and "media"; you don't really need artificial intelligence to understand that these people are interested in politics. At the other end of the study's results are the "good" users, deemed unlikely to spread disinformation: they mostly steer clear of politics, share from "reliable" sources, and tend to tweet about their personal lives. (How the researchers or their algorithm can be sure this content is not itself full of "disinformation" is not exactly clear.)

Unlike the "misinformation spreaders," who often also use "impolite" words, the opposite category favors a vocabulary composed of words like "excited," "birthday," "wanna," and "gonna."
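With a linear model like the one in the sketch above, word lists of this kind fall straight out of the learned coefficients: the most positive weights mark tokens the model associates with the "unreliable" label, the most negative with the "reliable" one. Again, this is a hedged illustration under the same assumptions, not the study's actual method of surfacing those words.

```python
# Continuing from the sketch above: rank vocabulary terms by the
# weight the logistic regression attached to each feature.
import numpy as np

feature_names = vectorizer.get_feature_names_out()
weights = clf.coef_[0]

# The article reports words like "liberal", "government", "media" on one
# side and "excited", "birthday", "wanna" on the other; whether a model
# trained this way would reproduce them is an open question.
top_unreliable = feature_names[np.argsort(weights)[-10:]]
top_reliable = feature_names[np.argsort(weights)[:10]]

print("most indicative of 'unreliable' sharing:", top_unreliable)
print("most indicative of 'reliable' sharing:", top_reliable)
```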

Other than helping tech companies and governments clamp down on speech (i.e., "disinformation") more efficiently, the study's authors also hope social scientists and psychologists can use their work to "improve their understanding of such user behavior on a large scale."

If you're tired of censorship and dystopian threats against civil liberties, subscribe to Reclaim The Net.
