
Artificial intelligence has introduced a new risk to online privacy. Images that once seemed harmless or generic can now reveal much more than intended, even when users have taken steps to remove identifying data.
OpenAI’s latest models, known as “o3” and “o4-mini,” do not need embedded metadata to determine where a photo was taken. They work entirely from visual information, analyzing subtle cues such as regional signage, architectural styles, menu typography, and the shape of streetlights to pinpoint a location.
The result is that even a carefully edited photo, stripped of location tags and metadata, may still allow someone to determine exactly where it was captured. The technology does not require permission or interaction from the person who posted the image.
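For context, “stripping metadata” typically means removing the EXIF block that cameras embed in an image file, which is where GPS coordinates live. The sketch below, using Python’s Pillow library (the file paths are placeholders), shows what that kind of scrubbing actually covers: it removes the embedded tags but leaves the pixels untouched, and it is the pixels these models read.

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with no EXIF block (GPS tags included)."""
    with Image.open(src_path) as img:
        # Copying the pixel data into a fresh image discards the
        # EXIF/IPTC/XMP metadata attached to the original file.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

# Example (placeholder paths): the output file has no location tags,
# but its visual content, signage, architecture, and so on, is unchanged.
strip_metadata("photo_original.jpg", "photo_scrubbed.jpg")
```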
On social platforms such as X, users have begun testing these models by submitting all kinds of images. Some are heavily filtered or blurry, while others appear entirely ordinary. The models often identify the location with surprising accuracy.
…