
Taxpayer-Funded Research Seeks to Devise New Stealth Censorship Technology

It suggests that a combination of fact-checking, “nudges,” reduced reach, and account banning could reduce so-called “misinformation” by up to 63%


A study, funded by US taxpayers and published in the journal Nature Human Behaviour, explores “innovative” and, specifically, less easily detectable ways to censor people online.

As reported by Just The News, the research, conducted by the University of Washington’s Center for an Informed Public (CIP), a recipient of state-funded grants, is now being slammed as a proposal to introduce “stealth” censorship of content that the study itself, naturally, refers to as “misinformation.”

And the study itself seems to have flown under the radar for quite a while, considering that it was published last year.

The university’s researchers promise that, should this proposed combination of fact-checking, “nudges,” and reduced reach be implemented, up to a whopping 63 percent of unwanted information would be eliminated from social platforms.

And what would distinguish this from the usual ways users are silenced online, reports analyzing the paper suggest, is that it would not require traceable tweaking of algorithms.

In order to strip the internet of so much content (and, opponents say, in the process completely silence media outlets critical of the current government), efficient “virality circuit breakers” would have to be introduced, the study suggests.

Namely, as the paper’s title (“Combining interventions to reduce the spread of viral misinformation”) indicates, the researchers were looking for methods to more efficiently stop “misinformation” (as designated by social media companies, or those in control of them) from going viral.

The authors first establish where they are coming from, ideologically speaking, by repeating the mantra that “misinformation” (such as it is understood) represents a threat to democracy (i.e., elections), to public health measures (i.e., Covid, or Covid-like situations, and vaccination), and to a wide range of issues in between, like “equity.”

To produce the study, they came up with a model “inspired” by research into actual viral diseases and their spread, then applied it to the frowned-upon viral content: 10.5 million tweets posted during the 2020 US presidential election.
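The basic idea of an epidemiology-inspired spread model is easy to illustrate. Below is a minimal, hypothetical sketch, not the study’s actual code or parameters: a branching “cascade” in which each live post spawns reshares at some rate, while interventions such as reduced reach and removals thin the cascade the way quarantine thins disease transmission. Every name and number here is an illustrative assumption.

```python
import random

def simulate_cascade(base_reshares=2.0, reach_factor=1.0,
                     removal_prob=0.0, generations=8, seeds=100):
    """Branching ('viral') model of a post cascade.

    Each generation, every live post spawns reshares at a rate of
    base_reshares * reach_factor; each reshare then survives
    fact-check/ban removal with probability (1 - removal_prob).
    """
    rng = random.Random(42)  # fixed seed so runs are reproducible
    live, total = seeds, seeds
    for _ in range(generations):
        expected = live * base_reshares * reach_factor
        # Bernoulli-thin the expected reshares by the removal probability.
        spawned = sum(rng.random() > removal_prob for _ in range(round(expected)))
        live, total = spawned, total + spawned
    return total

baseline = simulate_cascade()  # no interventions: the cascade keeps doubling
combined = simulate_cascade(reach_factor=0.6, removal_prob=0.3)
print(f"combined interventions cut total spread by {1 - combined / baseline:.0%}")
```

With these made-up numbers, the combined interventions push the effective reproduction rate below one (2.0 × 0.6 × 0.7 ≈ 0.84), so the cascade dies out on its own, which appears to be what the paper’s “virality circuit breaker” metaphor is driving at.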

These posts are branded “misinformation events” in the study, and the conclusion was that suppressing them through any single method, like “fact-checking,” does not work well.

Hence the proposed “framework,” which aims to combine a variety of censorship tools and methods that major platforms have already been using for years.

The authors clearly find fact-checking good in principle but lacking in efficiency, because it only determines whether a statement is true or false. And yet, the University of Washington study says, a lot of “disinformation” slips through these supposed cracks because it contains “partially true” statements. (The premise here is that “fact-checking” is done meticulously, with great care taken to avoid falsely flagging content, which is demonstrably untrue.)

But, whether the problem is there or not, the study has a solution: combining “fact-checking” with “nudges” (prodding users toward reaching a certain conclusion “themselves” rather than outright doing it for them), reduced reach, and account banning. Another major point is the time it takes to censor a post; the study takes a negative view of how quickly that currently happens and offers solutions for this as well, as the sketch below illustrates.
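To make the timing point concrete, here is a continuation of the earlier hypothetical sketch: the same branching model, but with the interventions switched on only after a delay, approximating a slow fact-check turnaround. Again, every parameter is an illustrative assumption, not a figure from the paper.

```python
import random

def simulate_with_delay(delay, base_reshares=2.0, reach_factor=0.6,
                        removal_prob=0.3, generations=8, seeds=100):
    """Same branching cascade as before, but the interventions only
    switch on after `delay` generations, mimicking the slow turnaround
    of manual fact-checking."""
    rng = random.Random(42)
    live, total = seeds, seeds
    for g in range(generations):
        rf = reach_factor if g >= delay else 1.0   # reach cut only once active
        rp = removal_prob if g >= delay else 0.0   # removals only once active
        spawned = sum(rng.random() > rp
                      for _ in range(round(live * base_reshares * rf)))
        live, total = spawned, total + spawned
    return total

# The later the interventions kick in, the larger the total cascade:
for delay in (0, 2, 4):
    print(f"delay={delay} generations -> total spread {simulate_with_delay(delay)}")
```

The penalty for delay is exponential, since each unchecked generation roughly doubles the live cascade before the thinning starts, which would explain the study’s preoccupation with how fast posts get suppressed.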

But all of this is raising alarm among those who believe the study is little more than an attempt to come up with a far more sophisticated censorship “machine” than the one we have now.

Foundation for Freedom Online (FFO) Executive Director Mike Benz, who used to work for the State Department, boils down the intent of the study to providing a way to “censor people using secret methods so that they wouldn’t know they’re being censored, so that it wouldn’t generate an outrage cycle.”

Another beneficiary, according to Benz, would be the platforms themselves, which would avoid being held accountable for censorship, because users would be unaware of it.

These are serious accusations, but one of the researchers behind the study, Jevin West, went about “denying” them in a curious manner: West said that the research was theoretical and made no recommendations, “policy or tactical” or otherwise, and that those who might have received such recommendations, namely the US government and social media platforms, may or may not have acted on any of the study’s conclusions.

“There was no follow-up from them and we have no idea what, if anything, any of those entities did with the learnings from our paper,” West said.

In other words, the proposals, heavily criticized by some free speech activists, are out there in the wild, and their “owners” apparently neither know nor care what is happening to them.

