The US Department of Health and Human Services (HHS) has released its general Covid advisory on “confronting health misinformation” signed by US Surgeon General Vivek H. Murthy.
We obtained a copy of the report for you here.
For the purposes of the advisory, health misinformation is defined as information that is false, inaccurate, or misleading “according to the best available evidence at the time.”
Among those urged to act against this kind of information are technology platforms, which are advised to devise “clear consequences” for users branded as “misinformation super-spreaders.”
Technology platforms are told to assess the benefits and harms of their platforms and products, and then “take responsibility for addressing the harms.”
Those found to be “super-spreaders” and “repeat offenders” in posting misinformation should face these “clear consequences,” which tech platforms are expected to devise and impose on their users.
Tech companies are also expected to commit to long-term investments for the purpose of combating misinformation, which can include changing their products – such as redesigning recommendation algorithms. The idea here is to tweak these algorithms so that unwanted medical information is downranked and difficult to discover.
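In practice, downranking of this sort is usually implemented as a score penalty applied before results are sorted. A minimal sketch of the idea – all names and the penalty factor here are hypothetical illustrations, not anything specified in the advisory:

```python
# Minimal sketch of downranking: posts flagged by moderation get their
# relevance score multiplied by a penalty before the feed is ranked.
# The Post fields and DOWNRANK_PENALTY value are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    relevance: float   # base score from the recommender
    flagged: bool      # marked as misinformation by moderation

DOWNRANK_PENALTY = 0.1  # hypothetical multiplier for flagged posts

def rank(posts):
    def score(p):
        return p.relevance * (DOWNRANK_PENALTY if p.flagged else 1.0)
    return sorted(posts, key=score, reverse=True)

feed = rank([
    Post("a", relevance=0.9, flagged=True),
    Post("b", relevance=0.5, flagged=False),
])
print([p.id for p in feed])  # the flagged post "a" sinks below "b"
```

The point is that the content is never deleted – it is simply made much harder to encounter, which is what makes this kind of intervention difficult for users to detect.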
The surgeon general also wants tech platforms to put more “friction” in place – like the labels and warnings that now appear on many social media posts – to dissuade users from interacting with the content in question, or to direct them towards “trusted sources.”
Next, unspecified researchers should be given access to Big Tech’s data so that they can learn “what people see and hear, not just what they engage with.” Another goal is for these “researchers” to verify how platforms are moderating and censoring content – some methods mentioned are labeling, removing, and downranking.
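One way platforms could make those moderation decisions auditable is to record each action in a structured log that researchers could later query. A minimal sketch, assuming a simple JSON record format – every field name here is a hypothetical illustration, not anything from the advisory:

```python
# Sketch: recording moderation actions (label / remove / downrank)
# as structured JSON records that could be shared with researchers.
# All field names are hypothetical illustrations.
import json
from datetime import datetime, timezone

ACTIONS = {"label", "remove", "downrank"}

def log_action(post_id, action, reason):
    """Return a JSON record describing one moderation action."""
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    record = {
        "post_id": post_id,
        "action": action,
        "reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

entry = log_action("12345", "label", "disputed health claim")
print(entry)
```

Whether such logs would actually be anonymized before being handed to outside researchers is exactly the question the advisory leaves open.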
Privacy is also paid lip service in a remark that user data “can be” anonymized and provided with user consent – but the advisory is not explicit that this “must be” the case.
And since not all medical (mis)information is in English, these US platforms are recommended to “increase staffing of multilingual content moderation teams and improve the effectiveness of machine learning algorithms in languages other than English.”
The advisory also wants platforms to amplify, i.e., direct users even more aggressively towards “trusted and credible sources” and those who are accepted as experts.
Then there’s – as the advisory phrased it – the “unintended consequences” of censorship. And that’s not about free speech suppression or anything similar – it’s “migration of users to less-moderated platforms.”
And that is another thing, the US surgeon general writes, that tech platforms should “work to understand.”