“Algorithms are opinions embedded in code”

Tech companies have taken over the power to make decisions for us. That can be convenient as long as it concerns playlists or navigation. However, under the guise of “objectivity”, their algorithms also categorize humans and reinforce social inequality. 

When we moved to New York City in the middle of the January 2018 snowstorms, we were hit by an algorithm. After offering us three identical apartments – hideous, overpriced studios in a shabby high rise that had been on the market for months – our broker shrugged. “That’s all I have for you”, he said offhandedly. “If you had a credit history, well, that would be a different story. At the same price, I could show you plenty of well-kept apartments down there.” He pointed out the window at the brownstones of Hamilton Heights that I had been looking at all along.

Targeted advertisement gone wrong (screenshot)

A credit what? Oh my, I should have seen this coming. Algorithms like the one behind the US credit score already rule many aspects of our everyday lives – although they are nowhere to be seen. We only recognize them if they hit us or if they fail – you all know these situations. Early in my pregnancy, before I had told my friends and family, an ad popped up on my computer: A giant moose standing on a woman’s belly, ironically advertising smarter “ways to lose stubborn belly fat”. Apparently buying a bathroom scale on Amazon (while being careful enough to google anything pregnancy-related in private mode) had landed me in the “dieting” category.

“That algorithm had a very bad day”, I said to myself and laughed it off, as I often do. Because despite all the data mining, I felt that I had little to worry about – a sign of privilege, as I understand now. What if an algorithm gets a woman’s early pregnancy right and alerts her health insurer or employer? Or if it inadvertently lets her parents know before she can do anything about it – yes, this has happened.

Algorithms simplify our lives – or turn them upside down

Tech companies have taken over the power to influence the basic functions of society. Some of their algorithms control what kind of information and political advertisements we are fed, thereby enclosing us in a filter bubble that reinforces what we already believe (confirmation bias) – one of the reasons for the increasing divide in our societies. Proponents argue that algorithms are making our lives easier, at least for most people most of the time: They sort out spam e-mails, suggest our favorite songs on Pandora or the safest way to get to work by bike. This reveals a utilitarian perspective, as Abeba Birhane has pointed out, according to which the impact on minorities can be ignored – they become “collateral damage”.

That impact can be devastating, because the most influential algorithms do not categorize e-mails and songs but us – human beings. They calculate our alleged probability of becoming a risk to a company or a government, and their verdicts can have life-changing consequences: Will you get a loan to go to college? Will your family be investigated over child welfare concerns? And how are police officers deployed in your neighborhood? “[Algorithms] separate the winners from the losers”, the mathematician Cathy O’Neil said in a TED talk. She wrote algorithms for US hedge funds and companies herself – until she quit in disgust. Today, she raises awareness of the inner workings of the formulas that she calls “Weapons of Math Destruction” (WMDs) in her book of the same name (Crown, 2016).

I am a walking number (a low one)

After being brushed off by our own broker in that forlorn Manhattan high rise, I realized the following: For insurance companies, real estate agencies and many employers, anybody living in the US is a walking number – precisely, a number between 300 and 850: our credit score. FICO, the Silicon Valley company that produces the most influential of these scores, promotes its algorithm on its website as a way toward objectivity and fairness: “By removing bias from the lending process, FICO has helped millions of people get the credit they deserve.”

A remarkable statement because it shames those with low scores – overwhelmingly people of color – as less deserving, and because it ignores the consensus that algorithms are indeed biased. “Algorithms are opinions embedded in code”, O’Neil says in her TED talk. Or, as she writes in her book: They “[encode] human prejudice, misunderstanding, and bias into the software systems that increasingly [manage] our lives.” Marketing them as objective is whitewashing.

“Algorithms are opinions embedded in code”

In the US, the mean credit score of white people was twice as high as that of black people in 2007 (c) Federal Reserve Board, Report to the Congress on Credit Scoring, p. O-25.

Only a handful of data engineers know how those vital credit scores are calculated, but it comes down to the following: The more we consume (taking out credit with one – or better, two or three – US credit cards) and pay it back immediately, and the longer this has been going on, the higher the score rises. This system is no problem for (and easily overlooked by) wealthy locals, but bad news for recent immigrants or poorer households, who may have to go into debt over a single unexpected repair or medical bill.
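
To make that pattern tangible, here is a minimal, purely hypothetical sketch – not FICO’s actual formula, which is proprietary and secret; the factors and weights below are invented for illustration – of how a score could reward a long history, reliable repayment and low card balances:

```python
# Purely illustrative toy model -- NOT FICO's real formula.
# It only mirrors the pattern described above: a longer history,
# on-time repayment and low utilization push the number up.

def toy_credit_score(years_of_history: float,
                     on_time_payment_rate: float,   # 0.0 .. 1.0
                     credit_utilization: float,     # balance / limit, 0.0 .. 1.0
                     open_card_accounts: int) -> int:
    score = 300.0                                    # floor of the usual 300-850 range
    score += min(years_of_history, 20) * 10          # length of history, capped
    score += on_time_payment_rate * 250              # repayment record weighs heaviest
    score += (1 - credit_utilization) * 80           # low balances help
    score += min(open_card_accounts, 3) * 20         # a few active cards help a little
    return int(min(score, 850))                      # ceiling of the range

print(toy_credit_score(12, 0.99, 0.15, 2))   # long, tidy history -> high score
print(toy_credit_score(1, 0.80, 0.95, 1))    # short history, maxed-out card -> low score
```

Run on these made-up inputs, a long, tidy credit history lands near the top of the range, while a short history with a maxed-out card lands near the bottom – and someone without any US credit card produces no inputs at all, which is exactly what “credit invisible” means.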

Algorithms tend to discriminate against poor people, people of color and women. Not only are poor people subjected to more scrutiny and suspicion by algorithms; they also lose out on education and job opportunities because their online profiles tend to be less complete or less curated, as Michele Gilman and Rebecca Green have shown. At the same time, poor people have fewer resources and less agency to mitigate or even challenge unfair outcomes.

The mean credit score of black people in the United States was half that of white people in 2007, according to a study by the Federal Reserve Board (see picture). Black people are also more likely to be “credit invisible”, i.e. to have no score at all – which is as harmful as having a low one. That affects one in ten adults in the US, and as I found out in that high rise, I was among them. Unaware of the scoring system, which does not exist in most countries, I had kept using my German credit card and seen no need to get a US card. My husband’s excellent score did not help, and neither did my previous rental history or a letter from my employer. The real estate agencies we dealt with required each household member to have a good or excellent score.

Low scores, third-class citizens, even lower scores in the future: a typical AI feedback loop

If you have no score, or if you are poor and have a score of, say, 345 – because it took you time to pay back the loans you took out for essential purchases – you will be treated like a third-class citizen: Your applications for some jobs will not even be considered, you will have to pay extra to rent an apartment, insurers will raise your premiums, banks will offer you predatory loans with high interest rates, and you will even have difficulty getting a credit card. All of this will hurt your score again, trapping you in a cycle of poverty. This is what Cathy O’Neil calls a “feedback loop”. She writes: “Instead of searching for the truth, the score comes to embody it.”
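
To see why this compounds rather than evens out, consider a crude, entirely hypothetical simulation (the rates and penalties are invented for illustration, not taken from any real scoring model): a low score triggers a predatory interest rate, the heavier burden makes a missed payment more likely, and every missed payment pushes the score down further.

```python
import random

random.seed(42)

def simulate_feedback_loop(score: int, months: int = 24) -> int:
    """Hypothetical illustration of O'Neil's feedback loop: a low score
    triggers worse loan terms, worse terms raise the chance of a missed
    payment, and each missed payment lowers the score again."""
    for _ in range(months):
        interest_rate = 0.05 if score >= 650 else 0.25   # predatory rate below 650
        # The heavier the interest burden, the likelier a missed payment.
        missed_payment = random.random() < interest_rate
        score += -40 if missed_payment else 5
        score = max(300, min(850, score))                # clamp to the 300-850 range
    return score

print(simulate_feedback_loop(720))   # starts comfortable, tends to drift upward
print(simulate_feedback_loop(345))   # starts low, tends to sink or stay stuck
```

With these made-up numbers, the account that starts at 720 drifts toward the ceiling while the one that starts at 345 tends to sink toward the floor – the score does not measure a person’s reliability so much as manufacture it.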

Broadway in Harlem, NYC, during the first snowstorm of 2018: A low (or no) credit score can trap people in a cycle of poverty (c) Christina Felschen

Looking for an apartment was one of the few instances where I realized what it means to end up in the “undesired” category. I felt ashamed, apologetic and angry at the broker all at the same time. (Yes, at the poor insensitive broker – because I do not know yet how to be angry at an algorithm.)

However, I was still lucky: As a white European, I am privileged. Although I have been living as a foreigner in the US for the last five years, I have barely been discriminated against (except for being a woman, which is a different story). Unlike people of color and most other migrants, I only experience inconveniences: The guy at a gas station in California’s Central Valley who overheard me and my partner talking in German and came over shouting “speak English, this is America!” was rude, but we did not have to be afraid that he would call immigration enforcement on us. When a Stanford student at a party stopped talking to me after his first question – which school I went to –, I could still tell myself that he was plain silly to think my European alma mater must be worthless just because he didn’t know it. And when I can hardly ever apply for jobs in the US because my visa is always set to expire too soon, it disappoints me, but I am lucky enough to keep working for my German clients. Such inconveniences are peanuts compared to what people of color experience every day.

After dragging our suitcases through the snow from one Airbnb to another for two weeks, we found a small apartment in Hamilton Heights whose owners gave us two options: paying an agency the equivalent of a full month’s rent to act as our guarantor, or paying a full year’s rent (gasp!) upfront at the beginning of our lease. Had we not earned money in another wealthy country before, both options would have been impossible.

But why do so many algorithms discriminate, and what can we do about it? Follow along – this will be my topic in the next post on March 4th.

Header image: (c) Christina Felschen, New York, 2018

Have you ever been “hit by an algorithm”? Do you want algorithms to make decisions for you and everybody else – and if so, in what ways? Let us know in the comments below or on Twitter – we’re happy to hear from you!

7 Comments

  1. Raero J Monteiro

    At this point it seems clear that the increasing employment of algorithms is a reality from which there is no turning back – whether we like it or not. But it is fundamental to raise those questions and underscore the human factor at both ends: People create algorithms and are affected by them in ways we are just starting to understand. From a communication for development standpoint, I believe it is essential that we counter the discourse according to which algorithms are inherently “impartial”, “objective” or “just” (while recognizing their great benefits). We have seen these claims of objectivity in different areas many times before, only to find that subjectivity, culture and power relations are always influential in some way. Your article is a reminder of that and a great read, congrats Christina!

  2. Thank you, Raero! Yes, that’s a good point. Algorithms are indeed great for analyzing patterns when there is plenty of rather unambiguous data, such as for predicting sports results or traffic jams. In those contexts, their use is sound and ethical – we would not want to give up that kind of progress. It is alarming, however, when companies pretend that they can apply the same pattern-recognition logic to complex social realities where there is no appropriate data and infinite room for prejudice – especially if the resulting algorithms take their “decisions” in a secretive way and therefore cannot easily be appealed. Luckily, more and more data scientists and legal professionals are fervently protesting these human rights abuses – that will be my topic in the next post 🙂

  3. Pingback: The techie resistance - NUDGED

  4. Pingback: How Big Data is shaping the future of 8 areas - NUDGED

  5. Hi Christina,
    I loved your article on this super important topic!
    It wasn’t until I listened to the article ‘Black Communities Are Already Living in a Tech Dystopia’ by J. Jackson from the class reading list that I became aware of the discrimination that happens so clearly in the US and slightly less clearly elsewhere. It worries me that even though we put so much faith in technology, it will (and already does) embed current inequalities within the technical sphere. And I wonder: what can I personally do?
    I particularly liked how you made it so personal, drawing on your own experience and reflecting on your privilege, and the way you used external links made me click on nearly all of them – very clever and well researched!

    • Thank you for your kind words, Amanda! The worrying point for me is that inequalities are not only perpetuated by these algorithms but increased manifold. I was alerted to this topic a couple of years ago by a friend who is a data scientist at Facebook – they were passing around Cathy O’Neil’s “Weapons of Math Destruction” among colleagues. It’s good to see that there are critical thinkers within these corporations. Very good question what we – as non-data scientists – can do against algorithmic bias. I think educating ourselves and others is a good first step – and pushing for regulation when the occasion comes (and it will).

  6. Pingback: The Informational Metropoles - NUDGED

