The World Economic Forum (WEF) is “thinking of the children” – especially, it seems, if that real or convenient concern can be worked into the agenda of enhancing and expanding the limits of online censorship.
The particular focus of a piece published on the WEF site, written by its senior writer for "formative content," is the age at which users sign up for social media, and the harms perceived to result from allowing children under 13 onto these platforms.
Related: How the “think of the children” narrative is being used to crush online free speech and privacy
The UK has long struggled with ways of introducing age verification, declared as a means of protecting children, but the effort has at the same time raised many red flags about how such checks could seriously undermine everyone's privacy and be misused and abused. The WEF now cites a report commissioned by the UK's telecommunications regulator, Ofcom, on the age of children on social platforms.
The WEF post doesn't go deep into the methodology behind the Ofcom report's results, but the results themselves are these: children not only sign up for social media using a fake age, but often do so with their parents' consent; in some cases, the parents are actively helping under-13s get on there.
However, the tone of both Ofcom's and the WEF's reporting suggests that these organizations may believe they know better than the parents themselves what's good for the child.
Ofcom believes that in many cases parents don't want their children to "miss out" – while the numbers show that about a third of kids and teens aged 8 to 17 had set up accounts claiming an adult age.
WEF's Global Coalition for Digital Safety Project Lead Minos Bantourakis says this situation can expose children to harmful content, including violence, grooming, hate speech, self-harm, suicide, and sexual content.
To prevent all that, WEF has a solution: in its own words, a wide range of interventions. And if you were wondering what the Global Coalition for Digital Safety might be, it's "a public-private partnership that draws together tech platforms and online safety organizations alongside academia, civil society and government in a project to enhance safety in digital spaces."
And once again, through this group, WEF wants to be “where it’s at” when it comes to setting standards and rules for everybody.
First, it would establish "a set of global principles for digital safety to ensure human rights, privacy and security."
Next, “the panel will also work to create a toolkit for designed-in safety interventions that could include content removal, warning labels as well as proactive tactics to improve safety.”
And lastly, WEF calls for a "digital safety risk assessment framework, which platforms would use to assess digital safety risks and measure the impact of interventions."