Picture this: it's rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who've convinced themselves that bad vibes are now a crime category.
Welcome to the MTA's shiny new plan for keeping you safe: an AI surveillance system designed to detect "irrational or concerning conduct" before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might've been called "having a bad day."
MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it "predictive prevention."
"AI is the future," Kemper assured the MTA's safety committee.
So far, the MTA insists this isn't about watching you, per se. It's watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: "The technology being explored by the MTA is designed to identify behaviors, not people."
And don't worry about facial recognition, they say. That's off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install "emotion detection" software that's about as accurate as your aunt's horoscope app.
The Governor's Favorite Security Blanket
This push didn't hatch in a vacuum. It's part of Governor Kathy Hochul's continuing love affair with surveillance. Since taking office, she's gone full Minority Report on the MTA, installing cameras on every platform and train car. Kemper reports about 40 percent of platform cams are monitored in real time, an achievement if your goal is to recreate 1984 as a regional transit initiative.
But that's not enough. Now they're coming for conductor cabs, too. Because apparently, the guy driving the train might be plotting something.
The justification? Public safety, of course. That reliable blank check for every civil liberties withdrawal.
The Algorithm Will See You Now
There's a strange and growing faith among modern bureaucrats that algorithms are inherently wiser than humans. That they're immune to the same messy flaws that plague beat cops and dispatchers and mayors. But AI isn't some omniscient subway psychic. It's a mess of code and assumptions, trained on biased data and sold with slick PowerPoint slides by tech consultants who wouldn't last five minutes on a crowded Bronx-bound 4 train.
US Transportation Secretary Sean Duffy threatened to yank federal funding unless the agency coughed up a crime-fighting strategy. And when Washington says jump, the MTA asks if it should wear a bodycam while doing it.
So the MTA submitted a plan: basically a warmed-over casserole of ideas they were already cooking. Only now with more jargon and AI glitter sprinkled on top.
You're the Suspect Now
The whole thing slots nicely into a global trend where governments outsource paranoia to machines. From South Korea's "Dejaview" to the UK's facial recognition fails to China's social credit panopticon, the race is on to see who can algorithmically spot thoughtcrime first. The problem? Machines are stupid. And worse, they learn from us.
Which means whatever patterns these systems detect will reflect the same blind spots we already have, just faster, colder, and with a plausible deniability clause buried in a vendor contract.
And while the MTA crows about safer commutes, the reality is that this is about control. About managing perception. About being able to say, "We did something," even if that something is turning the world's most famous public transit system into a failed sci-fi pilot.
So go ahead. Pace nervously on the platform. Shift your weight too many times. Scratch your head while frowning. In the New York subway system of tomorrow, that might be all it takes to get flagged as a threat.