The BBC has come up with a new job title – specialist disinformation reporter – and a lengthy article penned by this employee of Britain’s public broadcaster examines how to stop abuse and threats online, especially when these target women.
In order to “investigate” how serious the problem is and how far it reaches, a film was produced for BBC’s Panorama program, while a new study is quoted as showing that women are more likely to receive abusive messages online.
The premise is that this type of abuse is increasing on the internet and that it often coincides with racism and homophobia. It follows, the article argues, that governments, the police, and the tech companies behind social media sites need to protect women – who are apparently considered less capable of coping on their own than other internet users, and therefore require extra action taken from the outside to feel and be safe online.
One of the solutions to what is seen as a major and worsening problem is a proposal coming from the UN, which the author said Panorama was able to see exclusively and which seeks to force social media companies to effectively broaden their already existing systems of content and user labeling. Specifically, the UN draft calls for placing labels on social media accounts that are determined to have posted misogynistic content.
The draft also calls for tech companies to have more humans (as opposed to algorithms) decide what is and isn’t abusive and threatening content targeting women.
And for those users concerned that internet trolling against them might escalate into some form of real-world harm, the UN proposal would have social sites implement an “early warning system.”
The article is critical of the way UK police currently respond to reports of what is known, and criminalized, as “grossly offensive or obscene content” – when the author asked the police to intervene over such messages, the response was reportedly slow and has yet to produce any results.
Social media companies are also criticized for declaring their intent to “do more” on this issue and censor such content, but then by and large failing to deliver on that promise – at least in the author’s experience.