Your credit score will soon be partially determined by AI monitoring your social media accounts

Credit score companies are some of the most privacy-invasive companies in the world. The amount of data they hold on people is shocking. And it's about to get worse.

The way in which credit scores are calculated is changing rapidly, from human assessment to computers, and more recently to artificial intelligence.

The change is driven by a desire to make credit scores more equitable. So far, however, credit companies have failed to remove bias.

Credit companies started using machine learning to offer “alternative credit” as a way to make credit scores less biased. To do this, they use data that wouldn’t normally be included in a credit score to get a sense of a person’s trustworthiness. According to these companies, all data is credit data. Everything, including sexual orientation, political beliefs, and even what schools you went to.

Financial tech companies such as ZestFinance, Lenddo, SAS, Equifax, and Kreditech feed “alternative data” into their algorithms to generate credit scores, then sell their AI-powered systems to banks and other companies, which use them to assess creditworthiness.
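
To make the concept concrete, here is a minimal sketch, in Python, of the kind of pipeline such systems are built around: a classifier trained on “alternative” signals that outputs a probability of repayment, which is then rescaled into a score. The features, figures, and scoring range below are hypothetical illustrations, not any vendor’s actual inputs or method.

```python
# Minimal sketch of an "alternative data" credit model using a generic
# scikit-learn pipeline. Feature names are hypothetical examples of the
# non-traditional signals described above, not any vendor's real inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical alternative-data features for n past applicants.
X = np.column_stack([
    rng.integers(0, 2_000, n),   # number of social media connections
    rng.uniform(0, 24, n),       # average hours of phone use per day
    rng.integers(0, 2, n),       # attended a "target" school (0/1)
])
y = rng.integers(0, 2, n)        # 1 = repaid a past loan, 0 = defaulted (synthetic)

# The model learns weights linking these signals to repayment, then emits
# a probability that is rescaled into a familiar-looking score.
model = LogisticRegression(max_iter=1_000).fit(X, y)

applicant = np.array([[350, 5.5, 1]])
p_repay = model.predict_proba(applicant)[0, 1]
score = 300 + int(550 * p_repay)  # map the probability onto a 300-850 style range
print(f"probability of repayment: {p_repay:.2f}, score: {score}")
```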

Lenddo, for example, offers a Lenddo Score that “complements traditional underwriting tools like credit scores because it relies exclusively on non-traditional data derived from a customer’s social data and online behavior.” It even gives lenders the option of having applicants install an app on their phones so their online searches can be analyzed.

In return, customers get the illusion of agency: they might think that if they search for “good” things on Google and connect with the right people and groups on social media, they could become eligible for a loan.

“It suggests in some ways, that a person could control their behavior and make themselves more lendable,” said Tamara K. Nopper, an alternative data and credit researcher.

Lenders argue that credit scores based on alternative data can benefit those who have been discriminated against and excluded from banking. Not having a credit history, or having a bad one, doesn’t mean a person is untrustworthy. “Let us mine your alternative data and we will think about it,” they say.

But scanning people’s alternative data looks a lot like Orwellian-style surveillance. Letting a company browse through all your digital footprints amounts to giving up your privacy in the name of credit.

Furthermore, using alternative information to generate credit scores could result in them being even more biased.

AI systems are fed huge amounts of data in order to detect patterns and produce an output, such as a credit score. Good results depend on a quality dataset, because the programs can absorb the prejudices of their creators and of the data they are trained on. For example, Amazon had to scrap an AI-powered hiring tool trained on past resumes after it showed signs of bias against women.
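
A small synthetic experiment shows how this happens. In the sketch below (all data is made up), the protected group is never handed to the model as a feature; a correlated proxy and biased historical approval decisions are enough for the model to reproduce the gap.

```python
# Sketch of how bias in training data leaks into a model's output, using
# purely synthetic data. The model never sees group membership directly,
# only a hypothetical ZIP-code proxy that correlates with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

group = rng.integers(0, 2, n)       # protected attribute, hidden from the model
zip_proxy = ((group == 1) & (rng.random(n) < 0.8)) | ((group == 0) & (rng.random(n) < 0.2))
income = rng.normal(50, 10, n)      # identical income distribution for both groups

# Historical approval decisions carry the bias: group 1 was approved
# less often even at the same income level.
p_approve = np.clip(1 / (1 + np.exp(-(income - 50) / 10)) - 0.25 * group, 0, 1)
y = (rng.random(n) < p_approve).astype(int)

# The model only sees income and the ZIP-code proxy, never 'group'.
X = np.column_stack([income, zip_proxy.astype(int)])
model = LogisticRegression(max_iter=1_000).fit(X, y)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The two printed rates differ: the bias baked into the historical labels
# is reproduced through the proxy, without the model ever seeing 'group'.
```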

Today, there are laws aimed at preventing discrimination in credit scoring on the basis of color, religion, national origin, gender, race, age, and marital status. But for decades, banks targeted specific communities with predatory mortgages and loans.

Experts say that the high rate of foreclosure on those mortgages “wiped out nearly $400 billion in communities of color between 2009 and 2012.” The companies that buy up the resulting debts and take people to court target people of color more than anyone else.

Researchers warned that AI programs could effectively reintroduce redlining and other types of bias for certain communities by feeding on deeply biased or incomplete data.

In 2017, researchers at Stanford University found that even an objectively “fair” algorithm could be injected with bias in favor of a specific group depending on the quality of the training dataset.

A 2018 study found that “face-to-face and FinTech lenders charge Latinx/African-American borrowers 6-9 basis points higher interest rates.”
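
A back-of-the-envelope calculation shows what a gap like that means in dollars. The sketch below assumes a hypothetical $300,000, 30-year fixed-rate mortgage and an 8-basis-point markup near the middle of the reported range; one basis point is 0.01 percentage points of interest.

```python
# Rough illustration of what a few basis points cost a borrower, assuming a
# hypothetical $300,000, 30-year fixed-rate mortgage. The baseline rate is
# illustrative, not taken from the study.

def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard fixed-rate amortization payment."""
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

principal = 300_000
base_rate = 0.045                     # 4.5% baseline (illustrative)
markup = 0.0008                       # 8 basis points = 0.08 percentage points

base = monthly_payment(principal, base_rate)
marked_up = monthly_payment(principal, base_rate + markup)

extra_per_month = marked_up - base
print(f"extra per month: ${extra_per_month:.2f}")
print(f"extra over the life of the loan: ${extra_per_month * 360:,.2f}")
```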

Some experts worry that the use of alternative data could lead to a situation similar to the subprime mortgage crisis, with predatory loans once again offered to marginalized communities.

Even FICO, the data analytics company that in 1989 introduced the FICO score – a three-digit number associated with most credit scores today – recognized the dangers of over-relying on AI and alternative data. “[It] can actually obscure risks and shortchange consumers by picking up harmful biases and behaving counterintuitively.”
