Digital Racism

Jonathan Cook:

Welcome to This Human Business, a podcast representing the movement to reform business culture, moving it away from its obsession with digital efficiency to find new ways of working.

Conventional business writing has a tendency to engage in what, in last week’s episode, anthropologist Yuliya Grinberg referred to as “entrepreneurial bluster” – the pretense that business executives and the organizations that they lead are exemplars of both economic success and social progress. There’s a whole lot of happy talk from businesses these days. If you believed everything that you heard from speakers on the conference circuit, you’d think that business is at the forefront of an idealistic revolution that’s ushering in a new golden age of digitally-powered human flourishing.

If you have the courage to peek underneath that golden veneer, however, you’ll quickly see that there’s little substance to give it shape. This year, I’m beginning the podcast with a number of critical episodes confronting the pervasive problems that are making it difficult for authentically human business practices to take root.

It’s not easy listening, but if we’re serious about making business more human, and not just using “human business” as a new catchphrase to put a pretty face on abusive business practices, then we have to pay attention to these issues.

Next week, the podcast will explore some more positive territory, but this episode confronts a problem that gets to the heart of what’s gone wrong under the domination of digital commerce. This week’s episode is dedicated to the escalating crisis of racism in business.

When I refer to racism in business, what exactly am I talking about? There are different kinds of racism, and the classic, blatant kind of racial discrimination is one of them. Yes, there are still plenty of people in business who use racist slurs, and hire, fire, and promote people on the basis of ethnic identity. It’s horrible that we still need to devote resources to confront this kind of racism, but we do.

With the digital transformation of business, however, a new kind of racism, digital racism, has developed. That’s the kind of racism that this episode will be discussing, because it’s something that we’re just becoming aware of, something that neither business culture nor society in general has figured out ways to deal with.

The special challenge with digital racism is that, as with too many digital problems, the processes that foster and protect it are hidden from the view, not just of outsiders, but of tech insiders as well. As automated machine learning routines become more pervasive, it becomes increasingly difficult even for the people who craft the algorithms of artificial intelligence to pinpoint exactly where the problem is coming from.

There have always been mechanisms of racism that are hidden from general view, but digital racism is systemic racism on steroids.

Jordan Wright, senior designer at LPK, explained to me that when we’re talking about racism in the new digital culture of business, we need to keep individual and collective responsibility simultaneously in focus.

Jordan Wright:

I think a lot of it is really subconscious, and a lot of people don’t like to face the fact that they have a little bit of bias, even if it’s just a little bit. Everyone does, and when companies on that scale do it, it’s easy to be like, oh my gosh, and point it out. We can throw stones at them all day, but then it needs to trickle back down to that single person, like we have to check ourselves, if that makes sense. So I think it takes a lot of people to get to that big, underlying, systemic racism in a company like Oracle or Google. It wasn’t just one person, if that makes sense.

So I like to look at it as, if we kind of bring the story back down to an individual level, we’ll see how it got so big, because one person didn’t just push that up the pipeline. It was a group of people who had that same kind of mindset.

Jonathan Cook:

It’s a new kind of problem we confront when we’re discussing digital racism, because we face a twofold blindness. First, we have to overcome the social blindness to racism that’s been an obstacle for a long time. Second, we have to overcome the blindness to problems in digital manifestations of our culture that’s developed over the last three decades.

Chuck Welch of Rupture Studio identifies the way that one form of this second blindness, the presumption that digital technology is objective and therefore lacks bias, contributes to the development of digital racism in business.

Chuck Welch:

Humans are humans, man, so there’s always been tribalism since recorded time, whether it was religion or whether it was, you know, certain races or certain genders. There’s tribalism in everybody. I think people think algorithms or technology are unbiased, and that’s a fallacy. It’s just like saying data is unbiased. If I’m a coder or a software developer, I bring my bias right into the development of my software. If I’m a data analyst or a data wonk, the way I develop a study is biased, and the way I interpret that data is biased.

Jonathan Cook:

Chuck brings up a troubling point. Racism isn’t a technological problem, at its heart. It’s a human problem. The problem with digital racism is that it amplifies human racism to a new scale that is mind-boggling in its vastness, beyond the ability of any single person to imagine, much less to confront. When we add onto this multiplication of racism the habitual blindness to the problems created through digital transformation, we face a crisis of institutional racism beyond anything we’ve ever dealt with before.

Chuck is well aware of the history of racism in business, which he has been confronting for decades.

Chuck Welch:

You know, I’ve worked in advertising for many, many years. I remember first coming to New York in ‘98, and Jesse Jackson had what’s called the Rainbow PUSH Coalition partnership down on Wall Street. I went, and they were talking about diversity and inclusion. It’s exactly the same conversation 20 years later. Man, I don’t know. I don’t see much change. There’s a lot of push for women, and rightfully so, but they’re kind of skipping over the ethnic conversation, and I don’t hear it much.

Black people like me and other men of color, man, they are not sitting in the seats in the boardroom. They’re outside of it, so we often make these kind of black and white arguments, no pun intended, but the picture is much more nuanced when it comes to diversity and inclusion and who has power and who doesn’t. Because that’s ultimately what it comes down to. Who has the power to make decisions and create change? Who has the power not to make change?

If a client snapped his fingers, and said, “Look man, we want to see equal representation of our core customers in your agencies,” things would happen overnight. When a client snaps their fingers, you jump when you’re in agency world. So, I think clients need to put more pressure on these agencies.

Jonathan Cook:

It would be a mistake to indulge in the habit of presuming that a solution to racism in business, like all other challenges, can be found just by engineering a digital solution. The ugly truth is that racism, like many other problems in business, is allowed to spread because of the apathy of business leaders who have failed to identify the relevance of human problems to the bottom line of profitability. Chuck’s right that many instances of racism in business could be dealt with if business leaders cared enough to stand up and demand action.

Twain Liu, however, points out that there’s an even deeper structure of racism running through the digital skeleton of today’s business culture, a habitual bias that comes out of the binary forms of thinking that business orthodoxy demands.

Twain Liu:

The binary system originated, you know, with Aristotle. So, in his dualist logic paradigm he makes quite a distinction between what is true and what is false, what is valid and good and universal versus what is bad, what is male and what is female, and what is essentially somebody like him, who has an elite position within the political forums, versus somebody who is not like him, who is female and a slave.

What Aristotle did was then take that neutral, objective, scientific comparison between opposites and actually use it to frame social relationships, as the basis of what I consider to be his form of social engineering. Slaves, to him, were essentially just people who had been conquered and were under the dominion of the conquerors, and so he also allocated them as zero, as being almost non-human in a sense. That was a very elitist basis upon which he constructed his logic, essentially his binary logic.

Jonathan Cook:

Twain shows us how the very structure of binary logic supports racist ideology, with its absolute division between ones and zeroes, between white and black, between acceptance and rejection, between master and slave.

This link between binary thinking and racism has a deep history in our language, beginning with the word bi-nary, spreading into the term divide, meaning to split into two; discrimination, meaning to sort out those who are to be welcomed from those who are to be cast off in society; and finally device, the word for a tool that organizes the world by dividing things into two according to binary logic.

Mark Lehmann, Chief Technology Officer of Global Citizen, explains that the consequences of this deeply divisive world of binary data impact our individual identities. When our data is used to automatically place us into categories that define who we are, the results may be useful for businesses, but restrictive for us as individuals.

Mark Lehmann:

I think people are becoming a lot more aware of the value of themselves as individuals and the way that translates into a digital world, their data. Your data actually is you, and even if a person doesn’t think so, your chromosomal strand is data. That’s obviously you, but the whole thing about your personality, your likes or dislikes, the way you propagate your mental and heart force in the world is another. Just another form of data.

That’s precious because it’s you and it’s me. They don’t want to just manipulate you as an individual, but they also want to manipulate your digital self.

Jonathan Cook:

Professor Lauren Rhue is an expert in information studies who researches the way that the manipulation of binary data by digital businesses creates new manifestations of institutional racism.

Lauren Rhue:

My name is Lauren Rhue. I am currently an assistant professor of information systems and analytics at the Wake Forest School of Business. By the time this comes out later in the summer, I will be an assistant professor at University of Maryland in the Smith School of Business. I’m transitioning universities at the end of this academic year.

I have my PhD from NYU Stern School of Business in information systems, and how I came to this field is that I was at Stanford for my undergrad in management science and engineering. I was there from 2000 to 2004, which was a pretty exciting time. It was just after the dot com bubble burst. Entrepreneurship was very big, of course, but it was tempered with this question: What next? We had all of these amazing web companies and web sites come out, and then it all went bust. So now what? It was interesting.

From there, though, I decided I wanted to be a manager at a tech company, but I wanted to be back East. I’m originally from New Jersey, and I’ve always loved New York City. I grew up about an hour away from there. So, I came back home, and I was looking for jobs and I ended up doing data analysis before big data became what it is today. Back when I graduated, I said I wanted to analyze data for a living and people said, “Why would you want to do that? That’s so boring,” which is incredible now.

So, I worked in consumer analytics for an online advertising agency, and what struck me was how they took data and could come up with these really interesting strategic insights. With my background, an undergraduate engineering degree as I said before, I wasn’t able to answer the questions in as much depth. I wanted to become an expert, and so I looked at various PhD programs and decided that information systems really has the most interesting questions. It’s all about the business use of technology, and at that point it was starting to get into data and Big Data.

Jonathan Cook:

Professor Rhue began her line of inquiry with a study of representation on Kickstarter.

Lauren Rhue:

One of the biggest shifts is this idea that technology is not neutral. Any time that you create any type of technology or you analyze data, you make choices, and those choices can then influence the results that you get. You may be doing this unintentionally. You may be doing the best analysis you think you can do, but those choices have unintended consequences.

I was working on a project related to Kickstarter, and I was interested in asking these questions about bias. I think I mentioned before that the Luca and Edelman paper from Harvard Business School about bias on Airbnb was presented at an information systems conference in 2013, and that was exciting for me, because my first thought was, “Oh, we can talk about these things now.” I’m a black woman. I had noticed issues of bias in technology platforms in business settings, but it was something that wasn’t considered interesting. It wasn’t necessarily well received. When that paper was presented it was just so exciting for me. All I could think was, now we can talk about this, and I want to be doing research in that space.

So, I started to work on a paper about Kickstarter, looking at bias on the Kickstarter platform. I was looking at founder pictures, and we had, I think, one or two hundred thousand founder pictures, which is just too many pictures to manually go through and find the race of the people in them. I talked to a few colleagues, and they told me that they use a facial recognition platform, Face++, to go through and categorize people by race. So I used that technology to categorize people by race, and what was interesting is that, having worked with Face++ across a couple of different datasets, I did notice some systematic differences.

In a nutshell, we found that the race of the founder is the biggest driver of lower success on Kickstarter. We looked at the race of the fundraiser, the race of the subjects in the campaign photo, and then we also looked at the racial nature of the campaign description. Even though it might not be discussing race, some topics are more closely aligned with one race than another. Something like gospel music or hip hop or Atlanta is more closely aligned with African-Americans, anime and Japan are more closely aligned with Asians, and what we find is that, even controlling for the topic and for the subjects in the campaign photo, if there is an African-American in the fundraiser photo, there is lower success on average.
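
The shape of that analysis can be made concrete with a short sketch. This is not the study’s actual code; the file name and column names are hypothetical, and the point is simply what “controlling for topic and photo subject” looks like in practice.

```python
# Minimal sketch, not the study's actual code. Assumes a hypothetical CSV
# with one row per Kickstarter campaign and these columns:
#   success        1 if the campaign was funded, 0 otherwise
#   founder_black  1 if the fundraiser photo shows an African-American founder
#   subject_black  1 if the campaign photo's subjects are African-American
#   topic          a categorical label for the campaign's subject matter
import pandas as pd
import statsmodels.formula.api as smf

campaigns = pd.read_csv("kickstarter_campaigns.csv")  # hypothetical file

# Logistic regression of success on founder race, controlling for the
# race of the photo subjects and for the campaign topic.
model = smf.logit(
    "success ~ founder_black + subject_black + C(topic)",
    data=campaigns,
).fit()

print(model.summary())
# The finding described above corresponds to a negative, statistically
# significant coefficient on founder_black even with the controls included.
```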

Jonathan Cook:

How does this kind of racial bias enter into a technology that isn’t even conscious, much less motivated by a prejudice against African-Americans? Though tools such as digital information systems don’t possess motivations themselves, their creators do, and the businesses that put digital services out into the world are culturally embedded. These systems aren’t as objective as we make them out to be. They’re infused with our subjectivity.

Lauren Rhue:

I think that this type of bias can enter into these types of technologies in a variety of ways. One is, of course, the training set data. When these models are “learning,” they’re doing it based off of a data set of images, and there’s been a lot of talk in the last few years that these images need to be diverse, because if every face that it learns is the face of someone who is from San Francisco, maybe in the tech industry, if it’s all of Facebook’s employees, let’s say, that’s a sample set that’s 2 percent black, I believe.

So, you’re going to have a machine that has “learned” entirely on a very homogeneous corpus, and it’s not going to be able to recognize a different type of face or different ways of being. That’s part of it.

Another part has to do with labels. The machine doesn’t know what’s happy or unhappy. How it quote unquote learns is that you show it a lot of pictures of people smiling and you say that’s what happiness looks like, and a lot of pictures of people not smiling and you say this is what sadness looks like. Well, if there are any cultural differences, or if some people perceive someone who’s black and not smiling as angry or afraid, then you’re going to see the machine pick up on that exact same bias. In my particular study, I think that’s where the ambiguity comes in.
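
The label problem described here can be simulated in a few lines. The sketch below is purely illustrative: the data is randomly generated rather than drawn from any real annotation study, and the group column simply stands in for whatever visual cues a real model would pick up. If annotators label non-smiling faces from one group as angry more often, a model trained on those labels reproduces that gap.

```python
# Illustrative simulation of label bias; all data is randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)    # 0 or 1; stands in for cues a model would infer
smiling = rng.integers(0, 2, n)  # the same mix of expressions in both groups

# Assumed annotation bias: non-smiling faces are labeled "angry" 10% of the
# time in group 0, but 30% of the time in group 1.
angry_label = (smiling == 0) & (rng.random(n) < np.where(group == 1, 0.30, 0.10))

X = np.column_stack([smiling, group])
model = LogisticRegression().fit(X, angry_label)

for g in (0, 1):
    mask = (group == g) & (smiling == 0)
    rate = model.predict_proba(X[mask])[:, 1].mean()
    print(f"group {g}: predicted anger for identical non-smiling faces = {rate:.2f}")
# The model scores the same non-smiling expression as angrier for group 1,
# only because the human labels it learned from did.
```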

I think there just needs to be more human oversight, and that’s actually a paper I’m working on now, looking at this difference between subjective and objective measures. I’m looking at beauty. There’s now an artificial intelligence program that claims to score beauty. Again, it’s Face++. Beauty is inherently subjective. Different people will like different things. And I’m looking not just at whether it is biased, but at what happens when people come in and make their decisions knowing what the facial recognition program says, versus people who don’t know what the facial recognition says. How does that influence their choices?

And what I’m finding so far is that for the objective measures, like age, where people are asked to guess the beauty contestants’ age, people can ignore facial recognition. If it’s inaccurate, people can ignore it. For a subjective measure like beauty, what I’m finding is that people become more biased, because the facial recognition is actually more biased than the people are. By biased, I mean there’s a higher disparity between darker-skinned and lighter-skinned beauty contestants for facial recognition than there is for people.
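
The disparity mentioned here can be expressed as a simple comparison of group averages. The sketch below is a guess at the shape of such a check, not the researcher’s measurement code; the file and column names are hypothetical.

```python
# Minimal sketch of the disparity comparison, with hypothetical data.
# Assumes a CSV with one row per contestant and columns:
#   skin_tone    "darker" or "lighter"
#   algo_score   beauty score assigned by the facial analysis tool
#   human_score  average beauty rating given by human judges
import pandas as pd

scores = pd.read_csv("beauty_scores.csv")  # hypothetical file

def disparity(df: pd.DataFrame, column: str) -> float:
    """Gap between mean scores for lighter- and darker-skinned contestants."""
    means = df.groupby("skin_tone")[column].mean()
    return means["lighter"] - means["darker"]

print("algorithm disparity:", disparity(scores, "algo_score"))
print("human disparity:    ", disparity(scores, "human_score"))
# The pattern described above is a larger gap for the algorithm's scores
# than for the human ratings it is meant to inform.
```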

Jonathan Cook:

Professor Rhue has identified a host of terrible ironies. Digital objects are working to redefine our subjective human experiences. Emotionless computers are claiming to take the measure of our feelings. In the pursuit of the metrics of beauty, artificial intelligence is implementing ugly racist attitudes.

Lauren Rhue:

I feel like we need to think very deeply about when we use these types of subjective measures, because there’s no right or wrong answer, and we need to have people more closely involved, not only with the development of the technology, but with how it’s used, and we need to come up with some type of mechanism to convey that, again, this is just another tool, so to speak. This can help guide people’s decisions, but sometimes it’s inappropriate for facial recognition, or for any artificial intelligence, to be the one making the decision.

So how can we get them to be more closely aligned? Over the past few years we’ve seen this push toward the idea that artificial intelligence is going to make all these decisions and revolutionize business, but at the end of the day there still needs to be someone to manage it. Someone needs to use this as only an input, make the final decision, and be accountable for it.

Jonathan Cook:

Much of the racism that inhabits digital technologies is a consequence of the overreach of Silicon Valley. In the rush to profit from new technological gadgetry, businesses are trying to sell digital versions of every product and service imaginable, even when the tool doesn’t fit the job to be done. When businesses use machines to objectify human subjectivity, they reduce us down to the status of objects that can be bought, sold, and controlled. The mentality that motivates digital businesses to claim ownership of human data is frighteningly similar to the ideology that justified slavery, the ownership of human beings.

The racism that’s being implemented under the mantle of digital transformation is too vast for any single person to study. When I spoke to anthropologist Yuliya Grinberg recently, she directed me to the research of Safiya Noble.

Yuliya Grinberg:

Another work, by Safiya Noble, called Algorithms of Oppression, talks about how Google creates and encodes these kinds of racialized biases into its information systems. That’s not to say that this programming of inequality into information systems is always nefarious. Oftentimes it comes, I wouldn’t say necessarily from a place of malice, but perhaps from some kind of ignorance of the way in which certain kinds of systems replicate existing racial biases that are already around us in the real world.

When a woman of color is doing a Google search for certain kinds of terms online, certain specific kinds of images come up that correlate with racialized bodies, images that are not necessarily pleasant or positive or even acceptable for people to experience. That’s one of the ways that race becomes, or at least I’d say racial biases become, encoded online.

Jonathan Cook:

People tend to think of Google as a wonderful resource of information, but Google’s dominance in the realm of online search cuts us off from far more information than it reveals. Often, the information Google makes available to us has an appalling racial bias to it, as when Safiya Noble discovered that Google’s search engine presumes that women of color exist primarily as objects of sexual fetish, autocompleting searches with degrading results.

One instance of this kind of digital racism might be dismissed as just a glitch, but the pattern is much bigger than that.

In January, the MIT Media Lab discovered that Rekognition, the facial scanning technology developed by Amazon, mistakenly categorizes one third of dark-skinned women as men. The Human Interface Technology Laboratory in New Zealand observed that robots with artificial intelligence are being designed with white skin to resemble people of European ancestry, under the racist presumption that a European appearance will make the bots seem more acceptable to human beings. Google’s facial recognition system infamously categorized African-Americans as gorillas, and this summer, it was revealed that racism is so thickly encoded in digital frameworks that a Google project to reduce racist speech on its platforms actually ended up categorizing and censoring African-American texts as more offensive than messages from European-Americans.
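
Findings like the MIT Media Lab’s come from disaggregated audits: instead of reporting one overall accuracy number, the error rate is computed separately for each demographic group. The sketch below shows that shape of audit on hypothetical data; the file and column names are assumptions, not the lab’s actual materials.

```python
# Minimal sketch of a disaggregated audit, using hypothetical data.
# Assumes a CSV with one row per test image and columns:
#   group             e.g. "darker-skinned women", "lighter-skinned men"
#   true_gender       the ground-truth label
#   predicted_gender  the label returned by the classifier under audit
import pandas as pd

results = pd.read_csv("gender_classifier_results.csv")  # hypothetical file

error_rates = (
    results.assign(error=results["true_gender"] != results["predicted_gender"])
    .groupby("group")["error"]
    .mean()
)
print(error_rates)
# A single aggregate accuracy figure can look respectable while one group's
# error rate approaches one in three, which is exactly what disaggregation
# is designed to expose.
```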

Affectiva, the Emotion AI company founded by Rana el Kaliouby, is now purposefully coding racist stereotypes into its Emotion AI algorithms, claiming that these stereotypes are an effort to improve its infamously inaccurate assessment of people’s emotional states. In a bitter twist, Affectiva’s racist system of organizing human emotion into ethnic categories undermines the validity of the very psychological theory it uses to justify its technology. The theory of basic emotions, and the coding framework Affectiva has developed from it, relies upon the premise that everyone everywhere on Earth exhibits emotion in the very same, predictable way. Once Affectiva shifted to the idea that there are distinct racial modes of emotional expression, it abandoned the theory of basic emotions, and its artificial intelligence system lost its claim to validity.

What can be done about the pervasive racism encoded in the artificial intelligence systems developed by digital businesses? Professor Rhue has a few suggestions:

Lauren Rhue:

I would like to see more federal regulation and oversight for these technologies. I think Microsoft is doing a wonderful job right now calling for federal regulation, because when it comes to privacy, your face and your image are unique in terms of data that can be collected and gathered, and there’s been an explosion of facial recognition and video analysis. So, I would like to see it proceed with some regulation, to make sure that it’s being ethically deployed and that there is testing for these commercially available software programs to mitigate or reduce bias.

I don’t think it’s possible to completely eliminate bias. Very often, you don’t figure out that something is biased until there’s some extreme example or some unexpected case, and then you realize, oh no, there’s a systematic bias in these technologies.

I think that in our excitement over adopting these particular types of technologies, we can’t lose sight of the human element. There is somebody who is going to be deploying the model, who’s going to be interpreting the results, who’s going to be acting upon it, and because of that we need to be very thoughtful about what is conveyed, how the data are trained, and how we can communicate that in the best way possible.

I do think that there is a natural limit to what artificial intelligence can do, and we’re not talking about that limit because everyone’s so excited that the technology itself is moving forward. Just like with any new technology, it makes sense for businesses to adopt it when it gets to a certain point. I think you’re not going to see the productivity gains right away, and in the meantime what businesses may find is that these systems just don’t live up to expectations.

Or, there could be some type of legal liability, so something else I was looking at is some of the legal implications of this bias. If you’re using it for hiring and it seems to systematically not like female candidates, then that could be a problem, and I think it would take a couple of lawsuits or public relations disasters. So I think that’s more of the risk than anything else. But that’s, again, for generic business, not for anything that’s mission critical like threat detection.

But Apple had to deal with this in their stores recently. They have video surveillance, and there was an inaccuracy in it. Their video surveillance flagged one of their customers as wanted on an arrest warrant, the police went to his house, and Apple was then sued. The customer was African-American, and he said that he thinks their facial recognition was biased, and that’s why he was inaccurately targeted as somebody who has an outstanding arrest warrant.

Jonathan Cook:

We keep on hearing that artificial intelligence boosts productivity and efficiency, but the racism that keeps on manifesting in machine learning systems should remind us to ask who is benefiting from the new productivity and efficiency, and at whose expense. It shouldn’t be surprising that Silicon Valley, an industry that makes riches for a small elite by assigning people into categories and then targeting them for commercial exploitation, develops technology thick with racial bias.

How can we overcome racism when the very economic and social systems that we’re working in encourage the construction of businesses that are designed as engines of inequality?

It’s necessary, if we are to reduce racism in business, to aim for a larger reform of business culture, to dismantle the underlying framework that enables racism to become automatically encoded in business technologies. This new vision must move us away from Silicon Valley’s greedy obsession with the categorization of human beings. We have to go beyond the value of initial, external appearances and learn to respect the diversity of individual journeys and the potential for transformations of character.

With this in mind, next week’s episode will begin an arc back up toward a positive vision for business culture, though one that still resides in the mysterious underground. The topic of next week’s episode of This Human Business will be the poetry and fairy tales of business. We need new metaphors and storylines to build a new kind of business.

Thank you to the people who helped me put together this episode of This Human Business, and thank you for listening.

The music that opens and closes each episode is from Meydan. The song is called Underwater, and it’s from the album For Creators.