The dual approach continues: talking up the benefits of AI, a still very much emerging technology, in combating “disinformation,” while warning against the perils of AI in creating that same “disinformation.”
The point at which these two approaches converge is censorship: both the “disinformation warriors” who want to use AI in their fight and the AI doomsayers who claim deepfakes will destroy democracies work toward “monitoring,” “labeling,” and, ultimately, controlling content.
And sometimes they are the same actors: informal but powerful groups, government agencies, and legacy media.
In this “installment” of the AI story coming from the World Economic Forum (WEF), authored by the WEF’s Head of AI, Data, and Metaverse Cathy Li and its Global Coalition for Digital Safety Project Lead Agustina Callegari, we learn that the WEF would like policymakers, tech firms, researchers, and civil rights groups to band together and push for the deployment of advanced AI-driven systems to combat “disinformation and misinformation.”
The technique they would like explored, developed, and used would rely on pattern, language, and context analysis “to aid content moderation.”
The two authors of the post published by WEF are optimists: they think (or say they do) that AI-driven content analysis is at a level where it is capable of “understanding” context almost perfectly – or as they put it, understanding “the nuances between misinformation (unintentional spread of falsehoods) and disinformation (deliberate spread).”
The article speaks favorably about content authenticity and watermarking – such as that done by Adobe, Microsoft, et al. through their Coalition for Content Provenance and Authenticity (C2PA) – while throwing the obligatory bone to those worried about privacy and about protecting journalists from persecution “in conflict zones” (but what about journalists in all the other zones?).
Once again, the WEF is pushing for a “comprehensive system” – in other words, a type of standardization it would like to have a hand in, one that would guide the development of precisely those AI capabilities useful for censoring “disinformation,” etc.
The WEF truly wishes to position itself at the center of all this: its representatives write about “creating guardrails for AI” and note the existence of the informal globalist group’s AI Governance Alliance – “a flagship initiative by the World Economic Forum and part of the Center for the Fourth Industrial Revolution.”