Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.
It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.
The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.
Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.
Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.
Meta ran this experiment before and lost.
The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.
In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.
Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. More than $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.
The proposed smart glasses feature would repeat every one of those violations at a larger scale, with fewer possible defenses. A growing number of states now treat biometric data as legally sensitive, requiring affirmative consent before collection begins. Bystanders on a public street cannot give that consent. Meta cannot ask them. The legal exposure has already been litigated and priced.
The internal document is worth examining closely. Meta has assessed the civil liberties risks, acknowledged which organizations would push back, and concluded that releasing the product during a political crisis gives it a window. The company’s own document treats the controversy as a scheduling problem.
That calculation is wrong on the facts.
Public tolerance for biometric surveillance has contracted. It has taken years and much work, but people are finally starting to pay attention. Amazon’s Ring faced sharp public reaction when people understood that a feature marketed for finding lost dogs carried the architecture for mass biometric surveillance.
Each new application of face recognition in public spaces produces organized resistance, and the news of Meta’s internal reasoning will accelerate that response rather than preempt it.
The biometric surveillance risks are real and documented. The legal exposure is quantified across billions of dollars in prior settlements. And the company’s own strategy for managing scrutiny, waiting for advocates to be distracted, makes clear it cannot defend the product on its merits.