The latest figures from German law enforcement offer a stark warning about the dangers of expanding surveillance under the EU's controversial Chat Control proposals.
In 2024 alone, nearly half of all tips Germany received through the existing voluntary scanning system were false alarms.
According to the Federal Criminal Police Office (BKA), 99,375 of the 205,728 reports forwarded by the US-based National Center for Missing and Exploited Children (NCMEC) were not criminally relevant, an error rate of 48.3%. This is a rise from 2023, when the number of false positives already stood at 90,950.
Many of these reports are generated by private tech companies such as Meta, Microsoft, and Google, which voluntarily scan users' communications for possible child sexual abuse material (CSAM) and pass them to NCMEC.
Under the current "Chat Control 1.0" framework, this scanning is voluntary and does not apply to end-to-end encrypted services. Even within those limits, the system is flooding police with inaccurate data.
As the European Commission pushes forward with "Chat Control 2.0," which seeks to make CSAM scanning mandatory and expand it to encrypted messaging platforms, the 2024 BKA report raises critical doubts about the approach.
The proposals would effectively outlaw end-to-end encrypted messaging under the guise of child safety.
Yet, if nearly half of the reports under a limited, voluntary regime are false, the implications of scaling it up further are serious.
Mass surveillance is being deployed in the name of combating abuse even though the detection mechanisms are demonstrably inaccurate, with people's personal content falsely flagged and forwarded to authorities.
A mass expansion would not only risk overwhelming law enforcement with irrelevant data but would also undermine secure communication across Europe by forcing providers to break encryption or insert client-side scanning mechanisms.
Beyond the practical failures, the invasive nature of Chat Control proposals cannot be ignored.
By pushing for real-time monitoring of private conversations, the policy targets the core of personal privacy.
The assumption that every user is a potential suspect leads to widespread, suspicionless scanning, raising concerns that go well beyond the fight against abuse.