Research carried out in the United Kingdom found that technology companies do too little to combat the distribution of child-exploitation imagery, and that they lack effective barriers to prevent children under 13 from using services such as social networks.
The new report, which singles out social networks as the main offenders, is the result of a series of public hearings held between January 2018 and May 2019 that examined the United Kingdom's position as one of the world's largest consumers of child sexual abuse material.
The investigation pointed to social networks and image-sharing applications as the main culprits in this illicit market.
The inquiry concluded that the lack of preventive action on social networks has caused an increase in these crimes. The companies were also accused of acting only when their reputation was at stake.
Another problem discussed was age restrictions. Most sites only require the user to enter a birth date, which is easily falsified.
After the accusations, technology giants such as Google, Microsoft, and Facebook responded that these are global issues and that they have continued developing technologies to detect such material automatically, without relying on human review.
In search of possible solutions
In addition to naming culprits, the inquiry recommends that companies adopt new security measures before September 2020. One of the most controversial proposals is that every image pass through an upload filter and be verified and approved before publication.
Microsoft has attempted something similar in the past with its PhotoDNA technology, but it scans material after publication rather than as it is being uploaded.
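The core idea behind a filter like this is to hash each incoming image and check the hash against a database of known abusive material before allowing publication. The sketch below is illustrative only: PhotoDNA uses a proprietary perceptual hash that survives resizing and re-encoding, whereas this example uses a plain SHA-256 lookup, and the hash database and function names are hypothetical.

```python
import hashlib

# Hypothetical database of hashes of known flagged images
# (placeholder content for illustration).
KNOWN_BAD_HASHES = {hashlib.sha256(b"example-flagged-image").hexdigest()}

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload may be published, False if it matches
    a known-bad hash and should be blocked at upload time."""
    return hashlib.sha256(image_bytes).hexdigest() not in KNOWN_BAD_HASHES
```

An exact cryptographic hash like SHA-256 only catches byte-identical copies; a real deployment would use a perceptual hash precisely so that trivially altered copies still match.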
Although many companies said that detecting child nudity is very difficult, especially in live streams, the report mentions the French application Yubo, which reportedly uses an algorithm specialized for this task.
The report also addresses end-to-end encryption, which makes messages unreadable even to the service provider.
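The property at issue is that the provider's servers only ever relay ciphertext, which only the two endpoint devices can decrypt. The toy sketch below illustrates that idea with a repeating-key XOR cipher; this is not how WhatsApp or iMessage work (they use authenticated protocols such as the Signal protocol), and the key and message are invented for the example.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it twice with the same key recovers the original data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Hypothetical key known only to the two devices, never to the server.
shared_key = b"shared-only-by-the-two-devices"

ciphertext = xor_cipher(b"hello", shared_key)   # this is all the server sees
plaintext = xor_cipher(ciphertext, shared_key)  # decrypted on the recipient's device
```

Because the server holds no key, it cannot read the relayed bytes, which is exactly why law-enforcement access to such messages is contested.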
Applications like WhatsApp and iMessage have already faced accusations that this protection serves mainly to keep criminals out of the reach of police. However, attempts to remove it have failed after pushback from digital rights groups.
MORE: Digital rights group suggests meme ban upload filters could clash with privacy laws