Skeptics have long argued that big tech’s effort to offer you handy, variously useful, and seemingly innocuous gadgets and apps that harvest your health data has had little to do with your own benefit.
According to the more colorful scenarios, this growing subset of the tech industry was in reality designed to build up a large enough database to ensure that the tech giants’ enormously rich, now middle-aged founders had a realistic shot at very long lives – or nothing less than “immortality.”
In a more mundane but perhaps more likely – and arguably even more damning – scenario, these companies are working hard to harvest as much private and sensitive data as they can, simply to carry their business model to its logical conclusion: offering insurance companies, big pharma, prospective employers, credit agencies, or simply the highest (or most convincing) bidder very detailed, and very private, health and associated data profiles of millions of people.
Meanwhile, those arguing in favor of this kind of tech maintained that its design, intent, and results were all simply there to make the world a better place in the long run.
But now Motherboard is reporting that various health apps designed to beam their users’ private health information up to the mothership may in fact have a great potential to work against those users.
The website cited a new University of Toronto study that “highlights privacy issues around health apps by examining how medicine management apps share personal user data.”
And most of the apps tested proved to be “sharing sensitive information like medical history and demographics with third parties” – third parties that include such major players as Amazon Web Services, Facebook, Google, Microsoft, and AT&T.
But as is the nature of online advertising, “third parties” often give way to “nth parties” – and so the report points out that these entities “could, in turn, share data to digital advertising companies, and a consumer credit reporting agency.”
And while the onboarding process seems harmless, with the details buried in the legalese of the apps’ terms of service, the long-term consequences for the user – an entity now made up of data and metadata, isolated from their real-world context – may prove to be damning.
“I think we’re starting to see that people are discriminated against because of health conditions,” Quinn Grundy, an assistant professor at the University of Toronto, told the website.
Beyond that, algorithms are “making decisions” about people, or making them the target of “very personal marketing related to their health condition that’s quite invasive,” Grundy said.