Germany’s Network Enforcement Act (NetzDG), passed in 2017, is a piece of legislation heavily criticized by many civil and digital rights activists, legal experts, and journalists for promoting government censorship and restricting freedom of expression.
The act’s declared purpose is to fight online incitement and fake news; it regulates content on social networks by forcing them to delete “obviously unlawful content” within 24 hours or face fines of up to 50 million euros. The law has drawn sharp criticism both at home and abroad.
When news broke that Germany would pass a separate law amending NetzDG, one might have expected the changes to address some of this criticism. Instead, the amendments make the law even more stringent.
The draft law building on the original legislation was passed on April 1 by the government led by Angela Merkel. The draft will now be forwarded to parliament for approval, and according to Justice Minister Christine Lambrecht, it complements the amendments to NetzDG contained in a bill to combat right-wing extremism and hate crime passed on February 19.
A representative of the German government, Ulrike Demmer, said the changes reflected the belief that “criminal hate speech can be the breeding ground for real attacks.”
The amended law would “make it easier for users to report hate speech and access data” on platforms like Facebook and Twitter.
According to Demmer, Merkel’s cabinet took this action in response to several mass shooting incidents in Germany. Social networks will now have to provide “additional information on the origin of hate speech sought by users challenging it in court.”
Minister Lambrecht, who is behind the bill, frames it in similar terms: as strengthening users’ rights to defend themselves against hate speech published online.
The amended law aims to make it easier for users to flag content they view as hateful or threatening; if a case makes it to court, social networks will have to disclose data about the authors of such content.
Social media companies will also have to present “transparency reports” twice a year that provide “anonymized data for scientific purposes.” The aim is to learn which groups online are targeted with hate speech, and whether this targeting is done in a “coordinated” manner.