The Business of Human Emotion

Jonathan Cook:

Welcome to This Human Business, a podcast that explores the movement to reform commercial culture, giving voice to the value of the human experience in a time when the reflex to make everything digital is difficult to resist.

There is nothing more human than emotion.

The old belief that individual consumers, or commercial marketplaces as a whole, are directed by rational self-interest was debunked long ago. For good or for ill, emotion is what makes things matter. In business, it’s emotion that builds brands capable of escaping the trap of commoditization. The business that understands consumer emotions understands how to form passionate attachments in the marketplace.

If emotion is so important, as good businesspeople, we know what to do, right? We’ll gather as much data about emotion as we can, establish some operationalized definitions of emotion that enable quantitative measurement of our companies’ emotional performance, and integrate those measures of emotion into our analytics dashboards, so that we can implement rigorous management of emotional factors, guided by our data scientists.

Not so fast. It turns out that measuring and managing emotion is not that simple.

If you seriously want a scientific approach to managing emotion in business, then you need to confront this troubling reality: Scientists don’t even have a consensus about how to define what emotion is. Without such a definition, there cannot be any basis for a valid scientific study of emotion. Scientists can study tiny, isolated bits and pieces of human experience that are related to certain aspects of particular emotions, but they can’t get to the heart of the matter. They can’t get their instruments to bring the whole of any emotion into focus. Scientists can’t build a professional, reliable set of methods for the research of emotion without agreeing on the fundamental identity of what they’re studying.

From a humanist perspective, it’s easy to see why scientists have such trouble studying emotion. Emotion is inherently subjective, and every time that scientists try to reduce emotion to empirically measurable objective signs, their studies have already failed, because they’ve created an operational definition that doesn’t match the actual subject of study.

Traditionally, conventional business culture refused to accept that the study of emotion in commercial settings was a valid area for serious investment. Recently, that’s begun to change.

Fateme Banishoeib’s professional interaction with emotion has mirrored the journey of business as a whole. She began her career as a scientist, a chemist who entered management in a pharmaceutical company. As her experience grew, however, she discovered that scientific rigor was only one part of the skill set necessary for establishing sustainably successful business practices. She came to notice that the absence of emotional factors in business created operations that were out of balance.

Fateme Banishoeib:

My view on the world of business, or the business models as we operate them, changed completely when I started writing poetry, and the reason is because I literally started tapping not only the rational side of my brain but also the more intuitive and imaginative side of my brain. So, I basically started pairing the rational with the emotional, and this is actually what I think is missing in the world of business. There is so much focus on the rational that we completely neglect the emotional, to the point that we say, leave the emotions at the door. We cannot do that. We can’t just switch emotions off. We can only suppress them.

Jonathan Cook:  

We can only suppress emotion, Fateme learned. Suppressed emotions, psychologists warn us, are not inactive emotions. When we suppress emotion, it is merely diverted, and it emerges in our actions anyway, in strange, often self-destructive ways that we are usually not consciously aware of. 

This principle is as true for business organizations as it is for individuals. When businesses suppress emotion in the effort to become rational, they don’t stop being emotional. Their corporate culture becomes emotionally toxic. 

This is the twisted emotional dynamic that was witnessed by Roberta Treno.

Roberta Treno:

I’m Roberta Treno. I’m Italian. I’ve been living in Lisbon for the last 13 years. I’ve been working in the corporate world for almost 10 years.

When I was in the organizations, I saw a lot of sadness, a lot of discomfort, a lot of pills going around, people breaking down. They were not places of joy. They were not places where people were happy or could find something joyful inside, and by joyful I don’t mean just the euphoria or the light, sparkling moment of pop-up parties, but really the joy in what they were doing and the joy within a team. So I think joy is the key. The lens is to see the realities in a different way. That’s why, as I saw a lot of pain within the companies, I think joy is kind of the antidote that you can counterpose to this suffering, to this pain. I’ve really seen it, and I was myself close to a nervous breakdown many times.

Jonathan Cook:

Roberta Treno chose to take action against the anguish created by the corporate world’s supposedly rational pursuit of profit. She found an antidote, and joined with others to build the Joy Academy, an organization that asserts the importance of positive emotions in business life.

Vasco Gaspar is engaged in similar work, and argues that emotional hygiene is as important to business health as brushing our teeth is to individual physical health.

Vasco Gaspar:

I ask how many of them in the room earn money because of physical strength, and normally no one raises a hand. Then I ask how many of them earn their money because of their mental capacities or relationship capacities and so on, and everyone raises a hand. I ask them how many of you have physical hygiene practices, like taking a shower or brushing your teeth, and everyone laughs, because it’s such an obvious and stupid question, and almost everyone raises a hand. But when I ask how many of you have mental and emotional hygiene practices, most times people don’t even understand what that means.

Jonathan Cook:

The tricky thing about emotion, Vasco discovered, is that it must be felt directly. Measuring it externally and managing it remotely isn’t enough.

Vasco Gaspar:

Some of these things you need to experience. They are somatic. You need to feel them in the body. I was once with a client, a big bank in Portugal. This was a one-hour workshop, kind of just a taste of a workshop, and we did an experiment. It was one or two minutes of silence, just allowing yourself to focus on the breath and stay one or two minutes in silence. At the end, this guy, aged 50-something, said something like, well, I never felt this peace since, I don’t know, 20 years ago, so this is the first time in 20 years I have felt so peaceful. And it was just one or two minutes, which was not that much, but he needed to feel that in his body.

Some things, you need to face them. It’s like how to explain to someone the taste of an apple. It’s sweet, or, I don’t know how to explain it. You need to taste it. You can write a Ph.D. on apples, but you need to taste an apple to know what the taste is.

Jonathan Cook:

Vasco has grasped something vital about emotion. Emotion isn’t a physical attribute that’s out there in the world. It’s an experience, like the experience of taste. The taste of an apple isn’t the same as the chemical interactions of the molecules in the fruit with receptors on our tongue. Taste is a conscious experience that is constructed in the brain, and felt there.

Emotion is the same way. It’s not just an expression on a face. Digital engineers, however, lack the tools to grasp experience itself. So, they’ve built systems using the tools that they have in order to try to measure emotion computationally.

Can such projects work? Twain Liu, an expert on artificial intelligence, doesn’t think so. The gap between human emotion and robotic technology became apparent to her when she met R2D2.

Twain Liu:

I’m Twain. I am a systems inventor. So currently, I am developing a universal classification system that enables people to self-define and then share their definitions of their perceptions of things in the world.

My interest in AI started very early, inasmuch as I absolutely loved watching Star Wars. I was enamored of the robots in Star Wars, and so in 2016, when I actually managed to go to ComicCon, I met R2D2. Actually, there were at least a dozen of him in the room, and he kept beeping his little noises, running up behind me and trying to surprise me.

So, for me, there are these questions: do we understand each other between cultures, and then, on top of that, is a machine capable of understanding us, in the sense that it would be able to translate across multiple cultures, with that plurality of meaning?

Jonathan Cook:

Twain asks the most important question of our times: Is our emotion something that can be understood by a machine? Can the plurality of our cultural frames and emotional experience be measured so easily? Is our humanity something that can be translated into code?

There certainly are things about us that can be quantified and encoded. From the size and shape of our bodies to the molecular code of our genetics, there are many aspects of our bodies that can be appropriately digitized. Is emotion like this?

Chuck Welch of Rupture Studio works across cultures, and sees things differently.

Chuck Welch:

Say, if you go to parts of South America or parts of West Africa, people may get within your airspace. They may get right up in your face and have a conversation, where in America that’s considered off-putting, because people like their personal space around them. These things are, like I said, an element of culture.

We start with the inner life of people, and that’s something we always have to educate clients about. When they think of culture, they think about it coming out of creative expression, things that you can see. It’s fashion. It’s music. It’s art. It’s all these different things. Well, we start in the inner world. What are those beliefs? What are those values? What are those tensions? What are those kinds of unsaid or unspoken norms that drive human behavior? So there’s the inside, and then there’s the outward expression.

Jonathan Cook:

Emotional expression isn’t universal. Different people in different cultures show their feelings in different ways. 

Chuck has articulated a subtle turn in human experience that most people miss. Human factors such as culture and emotion are first noticed by other people in the form of external expressions. Their origin and the bulk of their operation, however, remain unseen. They are internal.

Scientists and engineers are well aware of the importance of using tools of measurement that match the subject of measurement. It’s such a basic notion that it seems ridiculous even to have to mention it.

No one would measure volume with a stopwatch, or distance with a scale. So why are digital corporations trying to measure our emotions with their machines?

It’s a well-known aphorism in business that we can manage what we can measure. What too many business leaders don’t stop to consider is that inappropriate measurement leads inevitably to inappropriate management. The urgent impulse in business to measure everything in order to make it manageable has led to the use of measurements that don’t make any sense.

It’s called the McNamara fallacy: the idea that what can be measured must be relevant. During the Vietnam War, U.S. Secretary of Defense Robert McNamara was obsessed with gathering data to such an extent that he lost sight of the strategic context of the data. He believed that so long as enemy body counts and U.S. recruitment of soldiers were both kept at high levels, a U.S. victory would be inevitable. Because these factors were easy to measure, McNamara forgot to consider the possibility that things beyond simple measurement might be more important.

Silicon Valley has infected business culture with a new manifestation of the McNamara fallacy: The idea that whatever can be measured with quantitative digital methods should be given strategic priority. Digital transformation of practically everything has become such a fad that most business leaders have forgotten that when we shift away from physics into human experience, the match between the subject of measurement and the tools of measurement breaks down. So it is that engineers in Silicon Valley corporations have found it easy to sell their clients on the idea of trying to measure human emotions with digital tools that were made to measure physical objects.

These services are known as Emotion AI, and they represent the most absurd extension of the digital obsession into the business world. A huge number of startup companies, like Affectiva, as well as the dominant digital corporations, have come up with services that claim to be able to have computers automatically detect emotions by scanning a person’s face, or listening to the tone of their voice, or measuring the electrical conductivity of their skin. 

Emotion AI takes these physical measurements of people’s bodies, and then analyzes them using an outdated, thoroughly debunked psychological theory from the middle of the last century: The theory of basic emotions. The theory of basic emotions proposes that all of human emotional experience can be reduced to only a handful of genetically-determined emotional states. Some people say there are six basic emotions. Others say there are eight, or ten, or maybe twelve. The main idea, though, is that there are very few actual emotions, and they’re all quite simple, easily standardized, and predictable.
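
To make that forced-choice structure concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical: the feature values, the prototype numbers, and the nearest-match rule are invented for illustration, and this is not how Affectiva or any other vendor actually implements its classifiers. The point is purely structural: whatever measurement comes in, the output must be one of a handful of predetermined labels.

```python
import numpy as np

# Six "basic emotion" labels, as the theory of basic emotions proposes.
BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Hypothetical prototype feature vectors (say, smile width, brow raise,
# skin conductance), invented here purely for illustration. Real products
# use far more elaborate models, but the structural assumption is the same.
PROTOTYPES = np.array([
    [0.1, 0.8, 0.6],  # anger
    [0.2, 0.7, 0.3],  # disgust
    [0.3, 0.9, 0.9],  # fear
    [0.9, 0.2, 0.4],  # happiness
    [0.2, 0.3, 0.2],  # sadness
    [0.6, 0.9, 0.7],  # surprise
])

def classify(features):
    """Return the label of the nearest prototype, no matter how poor the fit."""
    distances = np.linalg.norm(PROTOTYPES - features, axis=1)
    return BASIC_EMOTIONS[int(np.argmin(distances))]

# Whatever the person is actually feeling, the output is forced into one
# of these few boxes. An ambiguous, in-between measurement still gets a label.
print(classify(np.array([0.5, 0.5, 0.5])))
```

A feeling that falls between or outside those boxes simply gets misfiled, because the system has no way to say so.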

Is that what your emotional life is like — simple, standardized, and predictable? Is Emotion AI for real? Are these new digital services doing what they claim to be doing? Is what they measure actually emotion? 

Paying precise attention to the medium of digital measurement is essential if we’re going to understand the data that comes out of that measurement. It was decades ago now that Marshall McLuhan advised us that the medium is the message. Musician Mykel Dixon picks up this idea, and talks about the importance of bringing message and medium back into sync.

Mykel Dixon:

I’m a big believer that how you present information is just as important as the information itself. There has to be a congruence of method and message. I like to place the method, or the mode of delivery, a little bit ahead of the content, so that they’re already in this space, they’re in an experience, and then you’re dropping in the content bits, and, oh wow, you’re drawing links for them, but they’re feeling it before they’re thinking about it.

Jonathan Cook: 

Digital measurement tools are impressive tools for the tasks that they’re designed to accomplish. As Mykel points out, though, no single method of inquiry can answer every question. 

Digital devices are great at answering objective questions. Emotion, however, is not objective.

How can we measure emotion with tools that themselves feel no emotions?

Even if you could create a cold, objective technology that actually worked at scanning people’s emotions from their faces, you would still fail in your overall project, by creating an experience that emphasizes machines over people, one that seeks to gather data about emotion only to exploit it, rather than to engage in a relationship of mutual openness. You would kill empathy itself in order to create a system capable of, like a psychopath, delivering an unfeeling imitation of emotion in order to manipulate people, without caring about the consequences.

Professor Lauren Rhue has studied multiple platforms of Emotion AI that claim to be able to read people’s emotions by scanning their facial expressions. She has concluded that Emotion AI is conceptually incoherent.

Lauren Rhue:

My emotions paper is looking at a subjective measure. There is no ground truth for it. We cannot talk about accuracy, and that can be problematic, because certain cultures, for example, don’t reflect happiness in the same way. Think about Americans: we smile a lot, whether or not we’re happy. If you think about Russians, culturally, smiling is not viewed in the same way. Even if someone is happy, it might not manifest on their face. Because of that, there is no way of really QCing the tool. So, if you use facial recognition to understand the emotion of the people in a video or in a picture, how can you say whether or not the tool is accurate? You really can’t.

Jonathan Cook:

Professor Rhue echoes the ideas we heard earlier from Chuck Welch, who works as a cultural advisor to businesses, helping designers and executives understand that no, people are not all the same, wherever you go. People from different backgrounds have different experiences and expectations. They have different beliefs, and they have different ways of expressing emotion and interpreting emotional expressions.

Think about what would have to be true if, as the coders of Emotion AI insist, human beings are hard-wired with only a tiny number of specific, narrow, predictable emotions, all of which have identical physiological footprints, no matter how people are raised, where they come from, and what they’ve experienced. It would mean that, actually, human culture is irrelevant, and that our experiences are irrelevant to the formation of our emotions over time. If what the people selling Emotion AI claim were true, and the theory of basic emotions upon which it is based held up, the entire human species would be little more than a collection of a small number of crude, mechanical reflexes, without emotional nuance, identical and boring. It would mean that people are almost always what they appear to be at first.

In such a world, it wouldn’t be worth anyone’s time to pay attention to other people’s feelings in any detail. A quick, skin-deep, face-value scan would be enough. This is the dismissive, insulting vision of humanity that Emotion AI companies like Affectiva, Amazon, Apple, and Facebook are selling. What makes this arrogant attitude particularly alarming is that among the businesses who believe in it are the most powerful corporations that have ever existed. 

Professor Rhue points out the absurdity of such a simplistic view of the human experience.

Lauren Rhue:

There’s been a philosophical debate about the nature of emotion. Can you really know the emotional state of another human being? A lot of the claims, I think, that are being made are predicated on this assumption that yes, if you walk into a room and you see somebody else, you will be able to tell how they’re feeling, and that’s not necessarily the case at all. Another person might walk in and they might be a great actor or actress; you can tell what they’re trying to project, but you can’t necessarily know how they actually feel. And I think that what a lot of these companies are saying is that they might be able to tell the perception, and maybe people would agree with that, maybe not, but there’s a lot of culture and context embedded in any type of emotional analysis. Emotions are by their very nature extremely subjective.

Jonathan Cook:

Can anyone truly know the emotional experience of another human being? Professor Rhue points out an obvious barrier that Emotion AI companies conveniently ignore: Human beings are dishonest. They’re experts at emotional pretense, displaying facial expressions that are often the opposite of what they’re truly feeling.

Let’s be honest about this: even emotionally sensitive human beings who have been studying human emotional expression often fail to interpret other people’s feelings correctly. Such failures are inevitable, but failed emotional interpretations of other people’s behaviors become more likely when our interactions with them are short and out of context. 

Emotion AI is based on the most fleeting and impersonal, automated interactions. So, rather than correcting human error in interpreting emotional expressions, Emotion AI makes the problem of poor assessment through superficial interaction even worse. If we want to get better at understanding people’s emotions, what we need is longer, thicker, more contextually informed interactions, human to human.

Lauren Rhue:

If someone is smiling, then they’re happy. If someone’s not smiling, they’re angry. But when it’s not clear how they feel, when they’re not smiling quite as widely, maybe the person who originally labeled them, or the people who labeled them, said: if you’re black, I think you’re angry. I’m not sure, but I think you’re angry. And if you’re not black, then I think you’re happy, or I think you’re happier, even if you’re not smiling as widely. So I think that’s the benefit-of-the-doubt claim that ends up getting baked into the technology, because it learns, essentially, from the data that it’s been fed.

Jonathan Cook:

When Professor Rhue talks about the encoding of racism in emotion scanning artificial intelligence systems, she’s not just speaking abstractly. This topic has been the subject of her research, and it’s a subject we’ll talk about more in depth next week, in a podcast episode exclusively dedicated to the topic of racism in business culture.

For now, what I want to point out is that there’s a fundamental commonality between racism and the push for automated digital technology that we’re told will be able to read emotions. Both racism and Emotion AI are ideologically committed to the idea that individuality and diversity within human populations are an illusion that should be ignored in favor of broad sweeping claims about genetically based types, whether they’re types of people or types of standardized, hard-wired behaviors. Emotion-detecting artificial intelligence schemes and racism also share the belief that people are best judged through quick, superficial impressions and frameworks of reflexive external judgments made without an attempt to understand where a person is coming from on their own terms.

Anthropologist Yuliya Grinberg didn’t want to follow this trajectory of quick, harsh dismissals of people’s perspectives on their own lives. So, when she wanted to understand the culture of digital technology startups, she went to live with the tribes of Silicon Valley.

Yuliya Grinberg:

I’m Yuliya Grinberg. I just recently received my PhD from Columbia University in anthropology. Anthropologists do not study dinosaurs. They study how people live and experience their lives in different settings. So, I like to say that I’m an anthropologist of... You’re always an anthropologist of something. There is no really generalized anthropology. So, I’m an anthropologist of technology, and more specifically, of the way people use and create information systems in the contemporary United States. 

I think people who are less familiar with anthropology imagine anthropologists as always studying places that are really, really far away, you know, the least familiar to us living in the contemporary world, either historically far away, in a different time era, or physically far away, like Papua New Guinea from, let’s say, the United States. So, on some level I also study places that are far away, just far away from us in terms of experience. For most of us, although we are surrounded by technology and digital devices and all kinds of information systems, they’re still foreign to us, and unfamiliar in many ways, like a very distant place, someplace far away from where we live.

My specific work is with developers of what’s called wearable technology. I’m sure you’ve heard the term wearables. It’s everywhere these days and could refer to anything from your iPhone Watch, I mean Apple Watch, to rings and sensors placed in socks and clothing and headgear, your mattress, your home and environment, you know, in a kind of way, an ambient sensor experience, right, that’s now highly promoted commercially to collect all kinds of biometric and environmental data about people’s lives. So, my work thinks about the people who create these tools: how do they think about the kind of work they do, and what kind of data is it that they produce as a result of their innovation?

Jonathan Cook:

What Yuliya found in digital startup culture was a custom of wild exaggeration of the capabilities of technology. Especially in the Quantified Self movement, she found, there was a propensity to brag about features that didn’t work, and to puff up technical specifications far beyond what startup owners knew was actually possible.

Yuliya Grinberg: 

There is a lot of entrepreneurial bluster around the ways in which these systems create objective mirrors of our lives. Wear a Fitbit, and a ring for correct management of your sleep, perhaps a water glass to monitor how much you drink, and fifteen other devices, and eventually you’ll have a complete picture of how you live, what matters to you, how you feel, oftentimes. That kind of bridges into the question of emotions, because a lot of this technology is geared towards evaluating people’s psychological states.

Especially when you start to pool all these data sources together, you start to slowly create an ever widening and ever clearer image of the person about whom this data is being gathered. I am more interested in the opposite narrative. How is this image perhaps not as crystal clear as they imagine, regardless of the quantity of data streams that can be added to the data lake, or ocean, of any individual’s life?

It becomes very difficult to immediately accept the proposition that the data that’s being collected about our biometrics, or perhaps I mean physical activity, equals or represents people’s emotional state.

Jonathan Cook:

We have all been trained to consider the phrase “data-driven” as a synonym for objective. Data, conventional business culture believes, is about straightforward, unquestionable facts that lead data scientists to deliver certain, reliable insights to business.

In practice, quantitative data gathered through digital instruments turns out to offer a quite narrow view of reality, like what we see when we peer through a telescope at a scene close at hand. The precision of data can often be its downfall, especially when it is tied to the research objective only by a thin chain of tenuous premises that is linked together with threads of what a business client would like to be true.

Data-driven business, it turns out, is as rife with cultural bias as any other human practice. Its objective status is often little more than an insecure pose adopted by those who seek an authoritative voice that they have not earned. So, Emotion AI projects inevitably get tangled up in cultural differences, dismissing them as surely as colonial officers once did, insisting that, whatever the locals believe about the nature of their emotions, the Western digital authorities are prepared to set the record straight, and tell people all around the world about how they really feel. 

Yuliya Grinberg: 

If we accept as a premise that the body is a social thing, that perhaps has a different social life in the United States than it does in Europe, than it does in India, than it does in Asia, than it does in a different place in the world, then we can no longer accept a biometric device as objectively documenting “an emotional state,” because it requires us to think about how we even socialize and think about emotions as social things.

Jonathan Cook:

Yuliya asks us to step back from the effort to translate everything we know into data, to remember that emotions are not universal things, like physical properties of the universe, that are identical from place to place, person to person, society to society. 

She discovered in the course of her research that many of the people working inside Emotion AI companies will acknowledge, though only in private, that their software systems aren’t really able to scan emotion in the way that customers are led to believe they can.

Yuliya Grinberg:

If you go to a conference, if you read a magazine, if you see advertising for a biometric device that purports to measure various forms of emotion, you’ll hear that it is a device that tracks certain kinds of emotional states. Privately, or perhaps not even so privately, but I would say internally, outside of public kinds of presentation and posturing, there is obviously wide disagreement about ways to monitor, about what kinds of data can even correlate with different kinds of emotional states, and an acknowledgement and recognition that these are not trivial technical tasks, but of course also socially complicated facts.

Entrepreneurs are not naive or ignorant about the challenges of data analysis. There is a kind of duality. There is a need to present a certain kind of image about what these devices do, and of course a very, I wouldn’t say necessarily clear, but a very present understanding that these are not in fact obvious or even necessarily feasible realities.

I don’t take the kind of information that you receive from a self-tracking device as gospel, and having spoken to a lot of people who do just that and use these kinds of technologies, I think people are not naive. There’s a kind of understanding, a recognition, that these tools have limitations. Even between themselves, they differ. 

One of the things I did as part of my research was participate in this group called The Quantified Self. The Quantified Self is often discussed as a kind of group for enthusiasts of digital devices. Often it can also be people who are themselves innovators or developers or engineers working in this field, and it is a space to share passion but also to share ideas. There, you’ll see very clearly an expression of a recognition of the fact that these tools have limitations. They differ: even for the same task, let’s say step tracking, one step-tracking device will differ from a tracking device of a different brand. So, I think there is probably a wider acknowledgement about this privately.

Jonathan Cook:

How absurd has the industry offering digital scans of our emotions become? There are now Emotion AI companies selling devices that their inventors promise will be able to detect and interpret the emotions of house pets.

A few years ago, a company called Kyon began selling dog collars that it said would be able to translate basic physical signals, such as heart rate and respiration, to tell owners what emotions their dogs were having. It’s not difficult to identify the weak point in Kyon’s sales pitch: No one has ever been able to have a coherent conversation with a dog in which the dog communicated about its emotions. Dog owners often believe that they can understand their pets’ feelings, but there’s a great deal of anthropomorphism going on. No one really knows what emotions a dog feels, because emotions are internal states, not simple matters of heart rates.

Recently, Kyon has been forced to retreat on its outlandish claims of canine Emotion AI. It now sells its dog collars as trackers of dog location, with a little bit of information about physical activity mixed in. Kyon’s engineers appear to have finally acknowledged that they really didn’t have any basis for their claim of digital insight into dogs’ emotions.

Emotion AI for humans is plagued with unreliability as well. When Professor Rhue analyzed the Emotion AI systems from Face++ and Microsoft, she found that the two different software packages described people’s emotions differently. That’s like having one brand of thermometer tell you that it’s 30 degrees outside, while another brand tells you it’s 40 degrees. When similar discrepancies happen with Emotion AI measurements, nobody can really say which measurement is correct, because there isn’t any firm, agreed-upon framework of what an objective measurement of emotion even looks like.
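
To see what that kind of discrepancy looks like as data, here is a minimal sketch of how a researcher might quantify the disagreement between two such services. The scores below are invented for illustration, standing in for the output of two hypothetical services on the same ten photographs; they are not real Face++ or Microsoft results.

```python
import numpy as np

# Hypothetical "happiness" scores (0-100) returned by two different
# emotion-scoring services for the same ten photographs. These numbers
# are invented for illustration only.
service_a = np.array([72, 15, 88, 40, 95, 22, 60, 35, 80, 50])
service_b = np.array([30, 65, 45, 90, 20, 75, 55, 85, 25, 60])

# Two instruments that genuinely measure the same underlying quantity
# should agree strongly; Pearson correlation is one simple check.
correlation = np.corrcoef(service_a, service_b)[0, 1]
print(f"Correlation between the two services: {correlation:.2f}")

# With no ground truth for emotion, a low correlation cannot tell us which
# service is right. It only shows that they cannot both be measuring the
# same thing, which is the discrepancy described above.
```

In physics, a disagreement like this would send us back to calibrate the instruments against a standard. For emotion, there is no such standard to calibrate against.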

Lauren Rhue: 

I think that there is something just inherently part of the human spirit, that people have different perspectives on things, and so there are going to be subjective preferences. I think the problem is when you try to aggregate it up to say, well, here is what this looks like en masse, and it’s not going to be the same way for everybody. But in the way technology presents its analysis, the confidence interval, the level of uncertainty, is very rarely presented with the result. 

So, you have something like the beauty score, where it’s just a score. It’s, let’s say, 60 out of 100, and that’s how beautiful somebody is. We all know that reasonable people are going to find different people pretty, and yet this score is out there. The same thing with emotion. If you look at the correlation, interestingly enough, along with, I think it is, Amazon facial Rekognition, I had Face++ and Microsoft, and this wasn’t in the paper, there is a surprisingly low correlation for some of these, and that makes perfect sense, because the data that they were trained on is going to be different. The labels that they have are going to be different, and emotion is something that is largely subjective. Different people are going to view happy, the definition of happy, what that looks like on your face, very differently, and I think that by putting this on technology we’re ignoring some of the rich cultural differences that exist and instead trying to fit everybody into one particular standard.

I think it’s important that businesses have someone who is essentially in charge of managing these types of facial recognition adoptions, and that person should be thoughtful about some of the ethical questions, some of the bias and fairness questions, that are associated with it. Because technology is getting more powerful and the claims are getting larger, we really have to stop and think about what some of the adverse outcomes are. Also, transparency, to the point that you just made: when businesses put this type of technology in place, video surveillance, customers don’t necessarily know that’s what’s happening. Job candidates don’t necessarily know that they’re being analyzed. I think some level of transparency is important as well, both from a business perspective, that you should let your customers know what you’re doing, but also, if there is going to be bias in the system, then we really need to know what’s being used, to be vigilant, and to make sure that we identify those types of biases and do the best we can to mitigate them. 

Jonathan Cook: 

As Lauren Rhue points out, criticisms about the scientific invalidity of Emotion AI and the shoddy psychological theories upon which it is based are not intellectual abstractions. The application of Emotion AI is already warping human society. Right now, people and projects in business are being evaluated using Emotion AI technology, and much of the time, we don’t even know about it.

Digital cameras are all over the place, often scanning people’s faces without their permission or knowledge. Consider that new iPhones, and many other new smartphones, come with digital cameras and facial recognition and analysis software automatically installed. There are household appliances with such devices too, and when we walk through a public space, we run a gauntlet of digital cameras almost everywhere we go. Not all of these are connected to systems running Emotion AI software, but many of them are.

Our emotions ought to be the citadel of our privacy. When we walk out into public spaces, we reasonably expect that people around us might watch our bodies move, but it is another thing entirely for businesses and law enforcement agencies to use software packages that attempt to read our innermost feelings without our permission. 

It’s a fortunate thing, in a way, that Emotion AI technologies don’t really do what they say they can do. The mere idea that people’s emotions could be leveraged against them, at scale, by automated emotion detection systems is troubling enough. What’s at stake is illustrated by Amazon’s Emotion AI system, a software package called Rekognition.

Last week, Amazon announced that Rekognition has learned to automatically identify when people are frightened. Why would any corporation seek to construct automated systems capable of detecting when people are afraid? An apologist for Amazon’s Rekognition might argue that after detecting fear, an artificial intelligence system might be able to automatically institute actions to soothe the fear, to make people feel better. Aside from questions about the ability of AI to provide genuine soothing, it’s obvious that less benevolent uses for fear-detection technologies can quickly be found, in business negotiations, by police interrogators, or by military and government espionage organizations that have used torture in the past. Imagine what a criminal organization could do with fear detection artificial intelligence systems.

It’s naive to suppose that grave abuses of Amazon’s fear detection Emotion AI software won’t take place, if the company ever gets the system to actually work. At present, however, the most serious abuse being committed by commercial Emotion AI schemes is that of dishonesty.

We can take some comfort in the fact that the very same day that the story about Rekognition’s fear detection capabilities was released, another story came out, reporting that Rekognition is profoundly ineffective at the most basic facial recognition task there is: Telling the difference between one face and another. In tests of the software by the American Civil Liberties Union, one out of five state legislators in California were identified by Amazon’s artificial intelligence software as criminal suspects who are wanted for arrest. If Amazon’s Emotion AI is incapable of telling the difference between a state legislator and a criminal fugitive, why should we believe that the software can reliably identify the much more subtle visual signals of emotion?

What’s more, a month ago, the Orlando Police Department canceled its contract to use the Rekognition facial scanning and analysis software. The Department found that Rekognition could not perform the basic tasks that Amazon had promised it would. Even getting Rekognition set up and running was next to impossible, as the software was filled with basic technical glitches that Amazon had not worked out. After 15 months of adjustments by Amazon technicians, the system had failed to perform even one successful facial recognition test.

Over and over again, boasts about the abilities of Emotion AI software fail to materialize when they are put to the test. The supposed benefit of Emotion AI systems is that they are able to tame the unpredictability of emotion through technical and scientific rigor. The reality of these technologies, however, has more in common with carnival gimmickry than true science.

Yuliya Grinberg reminds us that, as much as digital technology corporations like to depict themselves as grounded in cool, rational analysis, their technicians and executives are as emotional as anyone else. Digital technologies represent an emotional agenda, though its creators work hard to conceal that fact.

Yuliya Grinberg:

The larger reality is that these are presented as scientifically sound, data-driven. Even that expression, data-driven, has so much solidity and weight to it. It has the whole kind of scientific infrastructure, the weight of a kind of scientific establishment, behind it, and that feels more trustworthy than your regular old personal sentiments.

We began this conversation with a question of emotion and the way in which emotion can be monitored, and with an open question: can emotions even be monitored? Another way to think about emotion and data monitoring is how artists and technologists even conceptualize it, when the device is presented as a rational, objective, scientific tool, as a cool, impersonal technology that can cleanly narrate and also observe one’s experience and one’s physiological response.

What if we think about technologies as by default never so rational and cool and objective, as always already, to a certain degree, emotional, irrational, some might say volatile, unpredictable? That’s not a way in which we are accustomed to discussing scientific enterprise or technological innovation in our country, but that’s also a component of how this field is, and it is unpredictable. It is shaped by emotional kinds of investments, by the renewal of passions. Devices often break down, and when that happens, that sentiment is not an accidental, temporary moment before it again becomes a rational tool.

Jonathan Cook:

The emotional character of the entire project of emotion-detecting artificial intelligence software is revealed by the over-the-top claims made by the executives promoting these services. Once, Rana el Kaliouby, the founder of Emotion AI company Affectiva, bragged that “in 3 to 5 years, we will forget what it was like when our devices didn’t understand emotion”. That statement was made three years ago, and there still isn’t a single digital device that can reliably measure emotion, much less understand it.

We shouldn’t be surprised by the repeated failures of the Emotion AI industry. We humans, the ones who actually feel emotions, know that they are not objects that can be measured with numbers. They’re mysterious subjective experiences that we hold on the inside. When we allow manifestations of our emotions to be expressed externally, we know that the emotion itself is something else. 

A smile is not happiness. A frown is not sadness. What’s more, a smile isn’t even a reliable indicator that a person is feeling happy. Sometimes, people smile when they’re angry. Most of the time, when people are sad, they don’t frown or cry.

The psychologist Lisa Feldman Barrett has worked with her colleagues to compile a small mountain of research that debunks the theory of basic emotion upon which the entire Emotion AI industry is founded. Emotion, she explains, isn’t a simple instinctual thing with a simple fingerprint that can be found anywhere in the physical body or in neurological pathways. Instead, individuals construct emotions in remarkably different ways, and express their emotions with astonishing diversity.

What Emotion AI systems attempt to measure, she explains, aren’t genuine emotional expressions, but stereotypes of emotional expressions, the sort of simplistic models we teach to little children when they’re just learning about what emotions are. Little children eventually outgrow these stereotypes. Emotion AI doesn’t.

The theory of basic emotions is built upon stereotypes. The original research upon which the theory was based was conducted with photographs of actors pretending to have emotion, displaying cartoonish facial expressions. This use of people pretending to have emotion continues in Emotion AI research to this day. 

These pretense-based research methods were based on a 20th century belief that emotions can be triggered by pretending to have them. You have probably heard some version of this folk belief, such as the idea that you’ll feel better if you force yourself to smile. This year, a massive meta-analysis of a huge number of studies on the relationship between facial expression and emotion revealed that these beliefs are not scientifically valid. Most emotions, the study found, cannot be triggered externally through the adoption of particular facial expressions. Even in the small number of emotions where such an effect was identified, the impact was extremely small and unreliable. You might feel a bit better if you force yourself to smile, the researchers found, but only a very small amount, and only on rare occasions.

Another scientific review scrutinizing the theory of basic emotions was released this summer. It concluded that the fundamental claim that emotions can be objectively measured through facial scanning is not scientifically supported. First, the researchers found that the claim has limited reliability, meaning that emotions are neither reliably expressed nor interpreted through a set of common facial expressions. Second, the review found that the assertion of a link between facial expressions and emotions lacks specificity. That is, there are not any unique facial expressions that apply only to one particular emotion. Third, the study concluded that the claim has limited generalizability, meaning that the research used to justify the theory of basic emotions has failed to properly account for the role of culture and other varying human contexts.

Over and over again, studies that purport to support the theory of basic emotions have failed to be replicated by independent researchers. The old psychological models used as the foundation of Emotion AI just don’t hold up to scientific scrutiny.

Unfortunately, entire companies are falling for the Emotion AI grift. When I attended a business insights conference this spring, almost every presentation there included some aspect of Emotion AI. Over and over, rooms filled with business professionals accepted without question or protest the idea that Emotion AI systems are reliable.

Can we be honest about why Emotion AI is being used so often in business? It isn’t because the technology is scientifically reliable, because it isn’t. Businesses use Emotion AI services because they promise a quick, cheap, and remote method of researching consumer emotion.

Too many people in business want Emotion AI to be real because they are looking for ways to keep emotion at arm’s length while checking “empathy” off from their list of things to do. They don’t want to have to go through the trouble of actually spending time with the human customers their businesses claim to serve. Instead of engaging with other people’s emotions in their authentic form, it’s easier for them to conduct data analysis on facial scans obtained through people’s smartphones.

Deep down, they don’t care that Emotion AI is a scam, because it enables them to get through their workdays more easily by keeping real emotion at arm’s length, with the false appearance of scientific objectivity.

What could be better than that? Genuine emotional authenticity would be better, actually.

Yes, it takes time to connect with customers, to pay attention to their full emotional experience in context. This time isn’t wasted, however. It’s an investment in doing the job right.

Emotion AI can be done quickly, but it doesn’t do the job right, and so can send a business off in the wrong direction. It’s much better to move slowly in the right direction than to hurry along the wrong way.

It’s bad business to surrender the vital domain of emotion, allowing shoddy pseudoscience to make it just another data-driven domain, but what else can we do? The alternative is to reclaim emotion in business as a human domain.

A good place to start is with emotional granularity. Granularity refers to the degree of precision with which we perceive the world. A coarsely grained perception picks up only a few, large distinctions. Finely grained perception, however, can identify a huge number of small nuances. Low granularity is simple, but clumsy. High granularity allows us to take action with fine, precise dexterity.

Emotional granularity is the ability to perceive distinctions between different emotions. Emotion artificial intelligence systems, such as the ones provided by Affectiva, Face++, and Amazon, have low emotional granularity. They don’t recognize the existence of more than just a few very basic emotions, and can’t even measure those well. People working with Emotion AI get a picture of the emotional world equivalent to what a person would experience if they had to navigate their surroundings with eyesight that perceived everything as six or eight huge pixels.

Most people have better emotional granularity than that. If you ask them to name all the emotions they can think of off the top of their heads, most people can identify 40, 50, or even 100 emotions, and later, after they’re done with their lists, they’ll come back to you with even more emotions that they didn’t think about at first.

These people have high emotional granularity, the ability to identify the differences between a large number of emotions. Their emotional lives are vivid, built through a rich history of interacting with other human beings.

High emotional granularity is vital in business. Here’s why: Research has shown that people who have high emotional granularity tend to have better mental and physical health than people with low emotional granularity. They also tend to be more successful at performing tasks than people with low emotional granularity. Individuals with high emotional granularity have an easier time getting through life because they possess a large number of conceptual frames with which to evaluate their interactions with other human beings. This translates into a more flexible, nimble mindset in general.

In business, it isn’t enough to simply want your customers to be happy. In order to build a rich emotional relationship with customers, a business needs to go beyond the nursery school feelings of sad, mad, and glad, to identify the precise, finely grained emotional experiences it wants to foster, as well as the specific emotions it wants to avoid.

Psychologist Tim Lomas describes the value of high emotional granularity when he advises that, “A word that signifies an unfamiliar aspect of life is an invitation to inquire into the phenomenon it specifies and to explore the possibility of bringing it into one’s life.” Finding and exploring possibilities is what human business is all about.

The historian of emotion Tiffany Watt Smith put it well when she wrote, “What we need isn’t fewer words for our feelings. We need more.” The more emotions we can identify, the more nuanced and successful our work will become. For that reason, I’ve begun a project of emotional granularity, describing emotions one by one, every day, at EmotionalGranularity.com. I’ll be integrating that project into this podcast, but I’ll talk more about that later in the season.

Building up a high emotional granularity sets the stage, but it isn’t enough. To make a human approach to understanding emotional factors in business complete, we need a method for researching emotion in true depth. We need Emotional Immersion.

You might have heard of the 36 Questions That Lead To Love project. Psychologists Arthur Aron and Elaine Aron conducted an experiment in which they discovered that when people sit down, human to human, looking each other in the eye, asking each other a series of questions with a gradually increasing degree of intimacy, the subjects grow more fond of each other. Sometimes, they even fall in love.

The questioning was open-ended, and face to face. This approach established an experience of mutual presence in which two human beings demonstrated that they were truly capable of caring about each other. You couldn’t replicate this project with a chatbot, or with a digital camera silently scanning a person’s face to detect the precise angle of the wrinkles on their foreheads. It takes a human being to establish human intimacy.

That’s the approach taken through the Emotional Immersion process. It’s not a survey, or a focus group, or one of those supposedly “in-depth” interviews in which the researcher doggedly pursues a line of questioning designed to get directly to the research objectives. It’s an open-ended process, in which there’s trust in the idea that emotional intensity will lead to the most powerful insights.

The goal of Emotional Immersion is not to attain scientific validity, but to represent people’s subjective perspectives in all their poetic richness, enabling businesses to integrate their products and services into authentic experiences as lush as the emotions they speak to.

As a non-scientific process, Emotional Immersion seeks out error, rather than trying to eliminate it. The method uses a relaxed interviewing process that guides research participants through a visualized recreation of relevant past experiences. 

People have an easier time remembering emotionally salient moments from their past, but they do not keep these memories in a stable, literally accurate form, as if they were photographs. Instead, memories of emotional experiences are reshaped over time to conform with people’s frameworks of conceptual meaning. So when Emotional Immersion researchers analyze people’s visual memories of past events, they do so with the goal of understanding the deeper emotional significance of people’s mistaken perceptions, remembering that these perceptions are more powerful drivers of behavior than any facts an objective study might collect. The error is the insight.

Emotional Immersion has a slow and respectful pace. Given the extra time allowed for in the method, researchers are able to proceed with a pattern of cyclical probing that always accepts people’s reality from their own perspective, laddering their language to find connections between apparently mundane objects and the profound importance they take on when they are given proper attention. 

The process works with the language that’s authentic to the experience, rather than seeking to impose any supposedly universal terms. So, even if research participants cannot yet articulate any single words that are well matched to the commercial context being studied, Emotional Immersion helps them articulate the poetry inherent in the experience. 

In this way, a vocabulary of the subjective dimensions of the commercial experience can be developed in detail, allowing the articulation of new levels of emotional granularity. This emotional granularity can guide the development of business strategies that respect the nuances of people’s choices instead of reducing their complexity to a few crude elements.

In short, Emotional Immersion is a human research approach that accepts people’s emotions on their own terms, working sensitively to articulate them more fully, to find the specific emotional contexts in which businesses and their customers can find mutually fulfilling common ground.

The stereotype of people in business is that they’re stiff and unfeeling, but it doesn’t have to be that way. Instead of following the digital herd that only sees humanity through the strange filter of data analytics dashboards, it’s possible for people in business to reach out and get to really know their customers intimately. With that kind of insight, they can do truly wonderful things.

Now, how can we properly follow a podcast episode about emotion in business? Later on in this season, This Human Business will go further into the subjective realms of enterprise, exploring the rhythms of myth and poetry in the world of commerce. That’s going to be a great trip, but first, like Dante led on by his beloved Beatrice, we need to visit some darker territory.

When you’re dealing with emotion, it’s important to practice honesty, even when it’s difficult. Next week’s episode will confront the issue of racism in business, because racism is the opposite of emotional authenticity. The racist impulse comes from the same place as Emotion AI, supposing that instead of discovering unpredictable, surprising human beings, we can conclude that people are nothing more than simple types determined by crude biology.

It’s not a pretty picture, but be brave. Tune in next Monday. In the meantime, you can find more episodes of this podcast, along with transcripts, at ThisHumanBusiness.com.

Thank you for listening to This Human Business, and thank you to Meydan, for the music that you’re listening to now. The name of the song is Underwater, and it’s from the album For Creators.