
AI Gun Detection Error Leads to Armed Police Detaining Baltimore Teen Over Doritos Bag

A harmless moment became a police takedown, revealing how AI can turn ordinary kids into instant suspects.



Concerns over artificial intelligence in school security are mounting after a Baltimore teenager was detained at gunpoint when a computer vision system mistook his snack for a firearm.
The episode, which unfolded outside Kenwood High School, has fueled public unease about the expanding use of AI surveillance in everyday settings.

Sixteen-year-old Taki Allen had just finished football practice on October 20 when several police cruisers raced toward him and his friends.

“It was like eight cop cars that came pulling up for us,” Allen told WBAL-TV 11 News. “They started walking toward me with guns, talking about ‘Get on the ground,’ and I was like, ‘What?’”

He said officers forced him to kneel, cuffed him, and searched his pockets. Only later did they show him the image that had prompted the confrontation: an AI-generated alert that had flagged a crumpled Doritos bag as a weapon.

“It was mainly like, am I gonna die? Are they going to kill me?” he said. “They showed me the picture, said that looks like a gun, I said, ‘No, it’s chips.’”

The technology behind the false alert was part of a gun detection program developed by Omnilert, which Baltimore County Public Schools adopted last year.

The system analyzes video feeds from school cameras and notifies police if it believes it has detected a gun.

Omnilert acknowledged that the alert was wrong but maintained that the system had nonetheless “functioned as intended.” The company defended its product, claiming it “prioritizes safety and awareness through rapid human verification.”
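For readers curious how such a pipeline can misfire while still “functioning as intended,” here is a minimal sketch of the typical flow: a vision model scores each frame, anything above a confidence threshold is escalated to a human reviewer, and the reviewer works from a single flagged still. Omnilert’s actual implementation is not public, so every name, label, and threshold below is an illustrative assumption, not the company’s code.

```python
# Hypothetical sketch of a camera-based gun detection pipeline.
# All identifiers and values are illustrative assumptions; Omnilert's
# real system, labels, and thresholds are not publicly documented.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the vision model thinks it saw
    confidence: float  # model score in [0, 1]
    camera_id: str     # which school camera produced the frame

ALERT_THRESHOLD = 0.80  # assumed cutoff; the real value is unknown

def triage(detection: Detection) -> str:
    """Route a model detection: discard low-confidence hits,
    escalate high-confidence 'gun' hits to a human reviewer."""
    if detection.label != "gun" or detection.confidence < ALERT_THRESHOLD:
        return "ignored"
    # The human reviewer typically sees only the flagged still image,
    # often low-resolution, before police are notified.
    return "escalated_for_human_review"

# A crumpled foil bag held at hip height can score above the threshold,
# so the pipeline behaves exactly as designed while still being wrong.
alert = Detection(label="gun", confidence=0.87, camera_id="exterior-04")
print(triage(alert))  # -> escalated_for_human_review
```

The point of the sketch is that nothing in this flow breaks when a chip bag crosses the threshold; a false positive is routed exactly like a true one, which is why a vendor can call the outcome “as intended.”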

School officials echoed that message in a letter sent to families, assuring parents that support services would be available. “We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”

The event reveals the hazards of letting AI surveillance guide police action without adequate oversight.

The Baltimore incident fits neatly into the surveillance framework already spreading through American schools, driven by the same logic that led a Tennessee middle schooler to a holding cell because a content filter couldn’t grasp a joke.

Whether the algorithm is crawling through Google Docs or watching from a hallway camera, the result is the same: automated systems making human mistakes, except with police attached.

In both cases, technology that was sold as “proactive safety” produced panic and punishment instead.

Gaggle flagged a phrase without context; Omnilert flagged a chip bag without a weapon.

Each system insisted afterward that it “worked as intended,” which might be the most revealing admission of all. The problem isn’t that these tools malfunctioned; it’s that they performed exactly as designed, handing human judgment over to code that cannot tell danger from drama, or a gun from Doritos.

If you’re tired of censorship and surveillance, join Reclaim The Net.

