The Brussels-based non-profit EU DisinfoLab is exploring how useful and efficient large language model (LLM) chatbots are in advancing online censorship (“moderation”), particularly as it relates to “misinformation.”
Although not formally a part of the EU, the group is involved in a number of “anti-disinformation projects” funded by the bloc and makes policy recommendations to its institutions and member countries.
One of those possible “recommendations-in-the-making” now appears to be a push to expand chatbots’ capabilities as tools of censorship, with the EU’s censorship law, the Digital Services Act (DSA), mentioned as the legal framework that would allow for this.
A DisinfoLab report, “Terms of (dis)service: comparing misinformation policies in text-generative AI chatbots,” frames the research as an examination of the “misinformation policies” of 11 top chatbots, aiming to determine whether they are doing enough to avoid being “misused or exploited by malicious actors.”
One of the conclusions is that the terms of service of the chatbots EU DisinfoLab picked are currently lacking when it comes to explicitly enforcing censorship, and the report predicts that the “currently inadequate” ways chatbots “moderate against misinformation” will only get worse – if, that is, they remain unregulated.
This is where the DSA comes in, with the report asserting that the law’s general provisions require online platforms to remove “illegal content expeditiously once they have actual knowledge of its illegality” – a liability that platforms are now allegedly skirting by avoiding the inclusion of “fact-checking.”
Europe is not the only place where the possibilities of enlisting chatbots as foot soldiers in “the war on disinformation” are being considered. The New York State Assembly is now looking to “make chatbots accountable.”
A proposal (Bill 025-A222) has been presented that would regulate the space by making companies behind chatbots liable for failing to provide “accurate information.”
The bill, introduced by a Democrat in the State Assembly, also seeks to add information defined as “materially misleading, incorrect, contradictory, or harmful” to the list of things companies would not be allowed to disclaim liability for.
And these categories, already broad, are made even more open to interpretation, as the proposed text states that chatbot operators would be considered liable if those types of information result not only in financial loss but also in “other demonstrable harm.”