The upcoming social media regulation bill in the UK, dubbed the Online Safety Bill, should require social media platforms to provide real-time data on “disinformation” – that’s according to fact-checking activists. The activists also raised concerns about the over-dependence on AI in content moderation.
If the upcoming Online Safety Bill passes, social platforms will be required to state, in clear and accessible terms and conditions, what types of content and behavior are acceptable. The bill also suggests that these companies should inform their users how they will handle legal but potentially harmful content.
Additionally, Category 1 tech platforms – those with millions of users, such as Facebook, Instagram, TikTok and Twitter – will be legally required to publish transparency reports on the measures they are taking to handle harmful content, something many of them already do voluntarily.
If they fail to do so, Ofcom, which will have oversight over online platforms, will be able to fine the companies 10% of their global annual turnover or £18 million (whichever is higher). Ofcom will also have the authority to shut down access to a platform that does not comply with the rules.
As part of an ongoing inquiry into online free speech, the House of Lords’ Communications and Digital Committee invited several fact-checking experts to provide their input.
Will Moy, the CEO of Full Fact, a fact-checking activist group, said that the proposed rules in the bill do not go far enough; the law should require online platforms to provide real-time data on disinformation.
“We need real-time information on suspected misinformation from the internet companies, not as the government is [currently] proposing in the Online Safety Bill,” Moy said. He also suggested that Ofcom should be granted the power to demand information from the companies that will fall under its purview.
Moy went on to point out that these companies use AI-powered algorithms not just for moderation but to take action on every piece of content. For example, these algorithms determine how many people will see a specific type of content and how it is displayed.
“We need independent scrutiny of the use of artificial intelligence [AI] by those companies and its unintended consequences – not just what they think it’s doing, but what it’s actually doing – and we need real-time information on the content moderation actions these companies take and their effects,” Moy said.
“These internet companies can silently and secretly, as [the AI algorithms are considered] trade secrets, shape public debate. These transparency requirements therefore need to be set on the face of the Online Safety Bill.
“Those choices [those made by AI-powered algorithms] collectively are more important than specific content moderation decisions. Those choices are treated as commercial secrets, but they can powerfully enhance our ability to impart or receive information – that’s why we need strong information powers in the Online Safety Bill, so we can start to understand not just the content, but the effects of those decisions,” he explained further.
Moy also raised concerns about online platforms’ over-reliance on AI for moderation.
“Although internet companies have very fine-grained control over how content spreads, the technology that tries to support identifying false or misleading information is both limited and error prone, like a driver who has great speed control, but poor hazard awareness,” he said.
“And in a sense, that’s not surprising, because there is no single source of truth for them to turn to, there’s not a source of facts you can look everything up against. This kind of technology is very, very sensitive to small changes, so even when good reference data is available, which it often isn’t, the technology is limited in what it can actually do,” Moy added.