On November 19, the European Union stands poised to vote on one of the most consequential surveillance proposals in its digital history.
The legislation, framed as a measure to protect children online, has drawn fierce criticism from a bloc of senior European academics who argue that the proposal, even in its revised form, walks a perilous line. It invites mass surveillance under a veil of voluntarism and does so with little evidence that it will improve safety.
This latest draft of the so-called “Chat Control” law has already been softened from its original form. The Council of the European Union, facing mounting public backlash, stripped out provisions for mandatory on-device scanning of encrypted communications.
But for researchers closely following the legislation, the revised proposal is anything but a retreat.
“The proposal reinstates the option to analyze content beyond images and URLs – including text and video – and to detect newly generated CSAM [child sexual abuse material],” reads the open letter, signed by 18 prominent academics from institutions such as ETH Zurich, KU Leuven, and the Max Planck Institute.
We have obtained a copy of the letter.
The argument, in essence, is that the Council’s latest version doesn’t eliminate the risk. It only rebrands it.
Much of the criticism focuses on the proposal’s reliance on artificial intelligence to parse private messages for illicit content. While policymakers tout AI as a technical fix to an emotionally charged problem, researchers say the technology is simply not ready for such a task.
“Current AI technology is far from being precise enough to undertake these tasks with guarantees for the necessary level of accuracy,” the experts warn.
False positives, they say, are not theoretical. They are a near-certainty. AI-based tools struggle with nuance and ambiguity, especially in areas like text-based grooming detection, where the intent is often buried under layers of context.
“False positives seem inevitable, both because of the inherent limitations of AI technologies and because the behaviors the regulation targets are ambiguous and deeply context-dependent.”
These aren’t just minor errors. Flagging benign conversations, such as chats between teenagers or with trusted adults, could trigger law enforcement investigations or platform bans. At scale, this becomes more than a privacy risk. It becomes a systemic failure.
“Extending the scope of targeted formats will further increase the very high number of false positives – incurring an unacceptable increase of the cost of human labor for additional verification and the corresponding privacy violations.”
The critics argue that such systems could flood investigators with noise, actually reducing their ability to find real cases of abuse.
“Expanding the scope of detection only opens the door to surveil and examine a larger part of conversations, without any guarantee of better protection – and with a high risk of diminishing overall protection by flooding investigators with false accusations that prevent them from investigating the real cases.”
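To see why investigators could be swamped, consider a rough back-of-the-envelope sketch. The figures below (message volume, prevalence of illicit content, error rates) are illustrative assumptions, not numbers taken from the letter or the proposal; the point is simply that when almost everything being scanned is benign, even a small false-positive rate drowns out the genuine hits.

```python
# Illustrative back-of-envelope estimate of false positives at scale.
# All numbers are assumptions chosen for the example, not figures from
# the Council proposal or the academics' open letter.

daily_messages = 10_000_000_000   # assumed messages scanned per day EU-wide
prevalence     = 1e-6             # assumed fraction of messages that are actually illicit
true_pos_rate  = 0.90             # assumed detection rate on genuinely illicit content
false_pos_rate = 0.001            # assumed 0.1% of benign messages wrongly flagged

illicit = daily_messages * prevalence
benign  = daily_messages - illicit

true_positives  = illicit * true_pos_rate
false_positives = benign * false_pos_rate

# Precision: what share of flagged messages are actually illicit.
precision = true_positives / (true_positives + false_positives)

print(f"Flags per day:      {true_positives + false_positives:,.0f}")
print(f"  genuine:          {true_positives:,.0f}")
print(f"  false alarms:     {false_positives:,.0f}")
print(f"Share of flags that are real: {precision:.2%}")
```

Under these assumed numbers, fewer than one flag in a thousand would point to real abuse material; every other alert would demand human review of an innocent conversation, which is precisely the flooding effect the researchers describe.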
Alongside message scanning, the proposal mandates age verification for users of encrypted messaging platforms and app stores deemed to pose a “high risk” to children. It’s a seemingly common-sense measure, but one that technology experts say is riddled with problems.
“Age assessment cannot be performed in a privacy-preserving way with current technology due to reliance on biometric, behavioural or contextual information (e.g., browsing history),” the letter states, pointing to contradictions between the proposed text and the EU’s own privacy standards.
There are also concerns about bias and exclusion. AI-powered age detection tools have been shown to produce higher error rates for marginalized groups and often rely on profiling methods that undermine fundamental rights.
“AI-driven age inference techniques are known to have high error rates and to be biased for certain minorities.”
Even more traditional verification methods raise red flags. Asking users to upload a passport or ID introduces a host of new risks. It’s not just disproportionate, the researchers argue. It’s dangerous.
“Presenting full documents (e.g., a passport scan) obviously brings security and privacy risks and it is disproportionate as it reveals much more information than the age.”
The deeper issue, however, is one of equity. Many people, especially vulnerable populations, simply do not have easy access to government-issued IDs. Mandating proof of age, even for basic communication tools, threatens to lock these users out of essential digital spaces.
“There is a substantial fraction of the population who might not have easy access to documents that afford such a proof. These users, despite being adults in their full right of using services, would be deprived of essential services (even some as important as talking to a doctor). This is not a technological problem, and therefore no technology can address it in a satisfactory manner.”
The broader concern isn’t just the functionality of the tools or the viability of the rules. It’s the principle. Encryption has long been a bedrock of digital security, relied upon by activists, journalists, medical professionals, and everyday citizens alike. But once a private message can be scanned, even “voluntarily” by a service provider, that foundational guarantee is broken.
“Any communication in which results of a scan are reported, even if the scan is voluntary, can no longer be considered secure or private, and cannot be the backbone of a healthy digital society,” the letter declares.
This line is particularly important. It cuts through the legal jargon and technical ambiguity. If messaging platforms are allowed to opt in to content scanning, the pressure to conform, whether political, social, or economic, will be immense. Eventually, “voluntary” becomes the norm. And encryption becomes meaningless.
***
Interestingly, the European Parliament has charted a different course. Its version of the regulation sidesteps the more intrusive measures, focusing instead on targeted investigations involving identified suspects. It also avoids universal age verification requirements.
The divergence sets up a legislative standoff between Parliament and the Council, with the European Commission playing mediator.
Unless the Council’s draft sees significant revision, two contentious features, voluntary message scanning and mandatory age verification, will dominate the trilogue negotiations in the months ahead.
The academics, for their part, are urging caution before the November 19 vote. Their message is clear: proceed slowly, if at all.
“Even if deployed voluntarily, on-device detection technologies cannot be considered a reasonable tool to mitigate risks, as there is no proven benefit, while the potential for harm and abuse is enormous.”
“We conclude that age assessment presents an inherent disproportionate risk of serious privacy violation and discrimination, without guarantees of effectiveness.”
“The benefits do not outweigh the risks.”
In a climate where public trust in technology is already fragile, the Council’s proposal flirts with the edge of overreach. The tools being proposed carry real dangers. The benefits, if they exist, remain unproven.
Europe has often led the way on digital rights and privacy. On November 19, it will reveal whether that leadership still holds.