
Google Vows To Use AI Models and Work With EU Anti-“Disinformation” Groups and Global “Fact-Checking” Groups To Censor “Misinformation,” “Hate”

The Big Tech giant is ramping up ahead of European Parliament elections in 2024.

Tired of censorship and surveillance?

Defend free speech and individual liberty online. Push back against Big Tech and media gatekeepers. Subscribe to Reclaim The Net.

Does the European Parliament need the “support” for its elections from a tech behemoth like Google? Google certainly thinks so, as does the EU.

And Google is doing it the best way it knows how: by manipulating information. A blog post on the giant’s site calls this “surfacing high-quality information to voters.”

And given how Google handles content across its platforms and services, when some content “surfaces,” other content “sinks” – i.e., it gets deranked.

That’s one thing to keep in mind. Another is the question of who decides, and by what criteria, what counts as “high-quality information.” One might say, only half-jokingly, “Democracy called and wants to know.”

In addition, Google is vowing to make greater use of artificial intelligence to counter what it decides is misinformation around elections, and to leverage AI models “to augment our abuse-fighting efforts.”

Working with the EU’s various “anti-disinformation” groups and “fact-checkers” from around the world to facilitate censorship is also part of the promised “support package.” The targets of this censorship will be the usual list of online bogeymen (as designated by Google and/or governments), real or imagined: manipulated media, hate, harassment, misinformation…

All this will have to be done at scale, Google notes, hence the promise of bringing in more AI than ever, large language models (LLMs) included.

When it comes to “surfacing high-quality information,” some of what’s presented is uncontroversial. If people search for how to vote, the search results will provide relevant details regarding requirements, dates, etc. But then there’s also “authoritative information,” specifically on YouTube.

Things get considerably muddier here: “For news and information related to elections, our systems prominently surface content from authoritative sources, on the YouTube homepage, in search results and the ‘Up Next’ panel,” the blog post states, adding, “YouTube also displays information panels at the top of search results and below videos to provide additional context from authoritative sources.”

And this could mean putting panels above search results “on videos related to election candidates, parties or voting.”

As for battling “misinformation,” Google says it will improve its enforcement and put more money into its Google Safety Engineering Center (GSEC) for Content Responsibility, among other “trust and safety” departments.

Ad disclosures, as Google sees them, will also mean revealing whether content is synthetic and “inauthentically depicts real or realistic-looking people or events” (that’s parody and satire out the window).

Plus, users will be seeing notes meant to inform them about “the credibility and context” of images in search, as well as the use of digital watermarking for AI-generated content via Google DeepMind’s SynthID.


