The UK government has published a policy paper, “Draft Statement of Strategic Priorities for Online Safety,” that sets out sweeping censorship, surveillance, and online age verification demands for Ofcom, the regulator responsible for enforcing the Online Safety Act.
The paper states that Ofcom “must have regard to the statement when exercising its regulatory functions on online safety matters.” In other words, implementing the priorities laid out in the document is not a legal requirement, but Ofcom is under a legal obligation to consider them, demonstrate that it has done so, and, where it does not implement them, explain why.
The first priority is “Safety by Design,” with the document’s authors invoking the Southport riots to once again focus on the alleged role of social platforms (rather than the events’ immediate or broader causes).
But that narrative is used to cement the need for the government to “hold platforms to account,” while at the same time pushing for the development and implementation of age verification tech.
While the controversial Online Safety Act is promoted as a way to protect children online, the paper says it will also serve “some adult users” who would “benefit from access to additional protections from content which does not meet the bar for illegality but could still be harmful.”
“Safety by Design” also demands “robust” policies and tools from platforms to “minimize (…) misinformation and disinformation presenting a risk to national security or public safety.”
Another thing that needs minimizing is “the damaging effects” of “emerging information threats,” with the government working in the obligatory concern about “AI coordinated inauthentic behavior at scale.”
“Transparency and Accountability” is about Ofcom’s Advisory Committee on Disinformation and Misinformation (this body is now known as the Online Information Advisory Committee).
Here, the government’s priority is to see “forward-looking, impact-focused advice” from the Committee – “so we can better understand how misinformation and disinformation can be tackled online.”
“Agile Regulation” treats developing technologies, and the internet as a whole, as sources of risk and challenges to citizens’ safety.
The intent is to give Ofcom room to engage in “rolling censorship”: constantly updated measures to keep up with these perceived, ever-developing risks and threats.
Ofcom is also instructed to “mitigate risks to users emerging from the sharing of AI-generated content on regulated services, and the deployment of AI by regulated services on their platforms, such as AI-driven content recommendation systems and embedded AI tools for users.”
This section praises the Global Online Safety Regulators Network but wants Ofcom and the government to engage in more international cooperation to “enhance regulatory coordination and coherence” globally.
Ofcom is also expected to take action against “small but risky services” if they are found to spread “illegal misinformation” and “misogynistic content.”
“Inclusivity and Resilience” speaks about the government “welcoming Ofcom research into risks and possible interventions” regarding “AI tools that are capable of generating realistic content.”
“Technology and Innovation” pushes advanced age assurance tools as well as government-backed third-party solutions to “online harms.”