Some fear the law will have a chilling effect on free speech, as there is little oversight and what authorities may define as "terrorist" content is left open to interpretation.
Platforms that fail to comply with the law will face fines. The law, which comes into effect a year after its publication in the Official Journal of the European Union, will apply in all member states.
The law had been in deliberation since 2018, when it was first proposed following an increase in terror attacks in the region. The version that eventually passed exempts terrorist content disseminated for academic, journalistic, artistic, and educational purposes from removal. It also dropped a provision that would have required online platforms to monitor and filter terrorist content.
Still, some believe the law will have unintended consequences, including a chilling effect on free speech. While the law does not require online platforms to preemptively monitor, flag, and censor terrorist content, platforms may feel compelled to do so anyway to avoid dealing with removal notices that carry a one-hour deadline. To manage that, they will likely rely on automated systems, which could end up censoring legal content.
“To say that [using automated filters] is not an obligation is to allow it,” commented Gwendoline Delbos-Corfield, a French Member of the European Parliament. She added that automated content moderation will be the preferred choice for online platforms because “the algorithm is going to be cheaper than human means.”
Perhaps the most significant cause for concern is the one-hour deadline itself, which is too short for smaller platforms with fewer resources. Terrorists tend to favor these smaller platforms precisely because they do little content moderation, whether for lack of policies or lack of resources. The law may therefore end up favoring the larger online platforms that already dominate the industry.