It’s not exactly a secret that artificial intelligence (AI) is excellent at pattern matching, but not so great at placing its findings in a proper context and understanding them.
That is why, for example, AI-based content policing is near useless – but a new product claiming to harness the power of AI is tackling a seemingly less complex problem: determining whether a visitor to a website is a child or an adult.
The company behind the product now being tested is the London-based startup SuperAwesome, which specializes in what it terms "kidtech" – tools and services that let websites comply with regulations aimed at protecting children from online tracking and marketing.
And the new age-recognition system now in the works would complement SuperAwesome's core business, as it would drive more websites to adopt "kidtech" to meet the legal requirements governing children's safety online.
Besides making sure nobody lies about their age on age-verification landing pages – and thus protecting companies from legal liability – SuperAwesome has a second goal, as stated by CEO Dylan Collins: protecting children's online data.
But there are a lot of buzzwords, sales pitches, and unrealistic promises and expectations swirling around the AI industry today – to the point that not everything advertised as AI actually is. The reason: genuine AI is an exclusive club, open only to those with very, very large amounts of data at their disposal.
Collins seems to acknowledge this in an interview with NBC News, but claims that his company has the data it needs to do legitimate AI business.
Effectively, SuperAwesome has been gathering this data – which Collins says is anonymous – for years, from its products geared toward children, such as PopJam and Kids Web Service.
“It’s fair to say this is the accumulation of six years of building this company and everything we have seen in the universe of kids,” said Collins.
As for how the new system his company is developing works – in other words, how the AI is trained – he said it uses signals ranging “from the physical device to the nature of the content and how the content is being interacted with, to where on the screen is being tapped.”
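The article gives no technical detail beyond those signal categories, but the general approach Collins describes – combining weak behavioral signals into a single child-vs-adult estimate – can be sketched in a few lines. The sketch below is purely illustrative: the feature names, thresholds, and weights are assumptions for the sake of the example, not SuperAwesome's actual model.

```python
# Illustrative sketch of signal-based age scoring, inspired by the categories
# Collins mentions (device, content type, interaction patterns, tap location).
# All feature names and weights here are hypothetical, not SuperAwesome's.

from dataclasses import dataclass


@dataclass
class SessionSignals:
    device_is_tablet: bool       # "the physical device"
    content_is_kids_genre: bool  # "the nature of the content"
    avg_tap_interval_ms: float   # "how the content is being interacted with"
    taps_near_center: float      # fraction of taps landing mid-screen (0..1)


def child_likelihood(s: SessionSignals) -> float:
    """Combine weak signals into a 0..1 score using made-up weights."""
    score = 0.0
    if s.device_is_tablet:
        score += 0.2
    if s.content_is_kids_genre:
        score += 0.3
    if s.avg_tap_interval_ms < 400:  # rapid, erratic tapping
        score += 0.2
    score += 0.3 * s.taps_near_center
    return min(score, 1.0)


# A session that looks child-like on every signal scores near 1.0.
session = SessionSignals(
    device_is_tablet=True,
    content_is_kids_genre=True,
    avg_tap_interval_ms=350.0,
    taps_near_center=0.9,
)
print(round(child_likelihood(session), 2))
```

A production system would presumably learn such weights from labeled data rather than hand-tune them, which is where the "six years" of accumulated signals Collins mentions would come in.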
Collins aims to license the system to websites, which would then be able to apply appropriate privacy policies once they trust that a visitor is a child.