Big Tech is doubling down on using artificial intelligence (AI) and its machine learning subset for moderation and/or censorship of content on their platforms.
That’s if Twitter’s latest acquisition, Fabula AI, is anything to go by.
According to Twitter’s acquisition statement, the UK-based company’s specialty is helping spot fake news.
Twitter has already invested in machine learning companies, such as Madbits and Whetlab, the report observed – but what makes Fabula AI special is that its tech is all about “fighting the spread of misinformation online.”
Fabula AI is a startup that was set up in 2018, just when market demand for the services it offers was spiking. That was two years after the previous US election – the event that sparked the “fake news” narrative – and two years before the next. And this point is not lost on VentureBeat, which notes that with the 2020 ballot, social media companies “will be under intense scrutiny for their handling of fake news.”
And this is “partly” why Twitter is now spending money on AI-based tech as a means of moderation, including this latest deal, the details of which have not been revealed. Twitter’s Sandeep Pandey, who is in charge of the operation, was, as expected, vague about how they will proceed: his group will “work towards finding new ways” to use machine learning in such areas as natural language processing and recommendation systems.
Almost as an afterthought, it is added that machine learning ethics will also be a part of this effort to “work toward finding new ways.”
It seems clear that the common goal of social media giants should be to invest in whatever trusted, unbiased, efficient, and accountable methods may be needed to ensure that misinformation is not spread online – but instead, Twitter has decided to rely on AI.
“Machine learning plays a key role in powering Twitter and our purpose of serving the public conversation,” Twitter CTO Parag Agrawal spelled it out.
That is the case even though, time and again, this method has proven itself to be nowhere near the level of sophistication that would justify employing it in an operation as sensitive and potentially damaging as deciding where real news ends and misinformation begins. And these shortcomings bring with them the real risk of censorship.
Talk about endangering a democracy.