And so it begins. In fact, it hardly ever stops – another election cycle is well on its way in the US. But what has emerged over these last few years, and what continues to crop up the closer election day gets, is the role of the most influential social platforms and tech companies.
Pressure on them is sometimes public, but mostly not, as the Twitter Files have taught us; and it is with this in mind that various announcements about combating “election disinformation” coming from Big Tech should be viewed.
Although, one can never discount the possibility that some – say, Microsoft – are doing it quite voluntarily. That company has now come out with what it calls “new steps to protect elections,” and is framing this concern for election integrity more broadly than just the goings-on in the US.
From the EU to India and many, many places in between, elections will be held over the next year or so, says Microsoft; however, these democratic processes are in peril.
“While voters exercise this right, another force is also at work to influence and possibly interfere with the outcomes of these consequential contests,” said a blog post co-authored by Microsoft Vice Chair and President Brad Smith.
By “another force,” could Smith possibly mean Big Tech? No. It’s “multiple authoritarian nation states” he’s talking about, and Microsoft’s “Election Protection Commitments” seek to counter that threat with a five-step plan to be deployed in the US, and elsewhere where “critical” elections are to be held.
Why some elections are more “critical” than others, and what exactly Microsoft is seeking to protect – it’s all very unclear.
But one of the measures is the Content Credentials digital metadata scheme, similar in spirit to watermarking. However, considering that the maker of the most widely used browser, Chrome, has not signed up to the group (C2PA) that spawned Content Credentials, the question remains how helpful the tech will be to political campaigns embedding it in their images or videos “to show how, when, and by whom the content was created or edited, including if it was generated by AI.”
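To make the concept concrete: schemes like Content Credentials bind a signed provenance record (who made the content, when, with what tool) to the content itself, so that any later tampering breaks the binding. The following is a toy sketch of that general idea only; it is not the actual C2PA/Content Credentials format, which uses standardized manifests and certificate-based signatures, and the key and field names here are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real provenance schemes use PKI certificates,
# not a shared secret.
SECRET_KEY = b"campaign-signing-key"

def make_credential(content: bytes, creator: str, tool: str, created: str) -> dict:
    """Build a provenance record bound to the content's hash, then sign it."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "tool": tool,          # could, e.g., note that the content is AI-generated
        "created": created,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_credential(content: bytes, record: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

The point of the design is that editing either the media or the metadata invalidates the credential, which is what would let a viewer tell an attested campaign image from an altered one, assuming the verifying software actually checks the record.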
Meta (Facebook) also announced its own effort in the same vein, seeking to combat altered content such as deepfakes – in case they “merge, combine, replace, and/or superimpose content onto a video, creating a video that appears authentic (… and) would likely mislead an average person.”
As ever, a very clear, concise, easy-to-enforce definition – not.
And who will help enforce it? No surprises there.
According to reports, Meta will “rely on ‘independent fact-checking partners’ to review media for fake content that slipped through the company’s new disclosure requirement.”