What to expect from your emotion recognition software, Part 2

In part one of our series, we compared sentiment analysis to emotion recognition in order to show the differences between the two approaches. We’re back to do another comparison and talk a little about the methodologies and how they differ.

First, a quick refresher: Sentiment analysis categorizes and analyzes text to determine whether the writer’s attitude towards a particular person, place, or thing is “positive,” “negative,” “neutral,” or “mixed.” Emotion recognition (or emotion detection) is the process of identifying and attributing emotional states, such as Anger or Sadness, based on communication.

Sentiment analysis measures polarity: Good/Bad. Emotion recognition measures emotions.
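
To make the distinction concrete, here’s a minimal sketch in Python of what the two kinds of results typically look like. The result shapes and field names are purely illustrative, not any particular vendor’s API:

```python
# Hypothetical result shapes -- for illustration only, not a real API.

# Sentiment analysis collapses a sentence into a single polarity.
sentiment_result = {
    "text": "I can't believe they cancelled my favorite show.",
    "polarity": "negative",   # one of: positive / negative / neutral / mixed
    "score": -0.7,            # typically on a -1.0 .. 1.0 scale
}

# Emotion recognition reports which emotions are present and how confident
# the detector is about each one.
emotion_result = {
    "text": "I can't believe they cancelled my favorite show.",
    "emotions": {"Anger": 0.66, "Sadness": 0.51, "Humor": 0.12},
}

print(sentiment_result["polarity"])   # -> negative
print(max(emotion_result["emotions"], key=emotion_result["emotions"].get))  # -> Anger
```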


Is emotion recognition always better?


We’d love to say that it is, but sadly, that’s not the case.

Emotion recognition isn’t always applied ethically. Facial recognition software, for example, has been shown to suffer from bias, which has led to the retraction of many projects and products. This kind of emotion recognition is also easy to fake, and some of its fundamental scientific assumptions are problematic at best.

Detecting emotions through aural tonal analysis seems promising, but it too is problematic. Emotional prosody only works within normative ranges, has considerable problems with cultural nuance, and in the end is only analyzing modulation. Let’s face it: you can’t gush to your friends about your new object of affection using grunts and groans. (Though you would look pretty silly trying.)

Unfortunately, some operators rely on sentiment analysis at the core of their “emotion” recognition, and these hybrid models only exacerbate the problem. As we’ve discussed in our other series (Part 1 and Part 2), science has not agreed on a consensus model of emotion. Some researchers who claim to have found multitudes of emotions are likely measuring the same variables, confounding their results. Or they’ve created an ML model from a specific data set that is likely to prove biased.

In these cases, emotion recognition isn’t better; it’s often confusing to an operator. When psychology can’t agree (some models posit 5, 7, 13, or 27 emotions, and others up to 40 different “distinct” emotions), you are not measuring the emotions that are present. You have a mess.

Some models are clearer


VERN is based on neuroscientific conceptualizations of the mind and does not rely on psychological studies that may or may not be replicable (and may therefore not be valid). We recognize emotions as euphoric, dysphoric, and fear. Theoretically, all emotions are a combination of the three, and so far that’s what our model is showing. We don’t release a detector unless its agreement with human coders is statistically significant.
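
As a rough illustration of that idea, and nothing more, here’s a sketch in Python. The class, the blend, and the numbers are hypothetical, not VERN’s internal representation:

```python
from dataclasses import dataclass

# Illustrative only: a detected emotion expressed as a mix of the three
# base states described above. The weights below are made up.
@dataclass
class EmotionVector:
    euphoric: float   # elevated, pleasurable affect
    dysphoric: float  # distressed, negative affect
    fear: float       # threat response

# A hypothetical blend: dark humor might read as strongly euphoric
# with a sizable dysphoric component.
dark_humor = EmotionVector(euphoric=0.6, dysphoric=0.5, fear=0.1)

print(dark_humor)
```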

When you’re dealing with people’s emotions, you need the analysis to be as reliable as possible.

Word-based analysis has proven to be the most reliable, because words are the labels we put on concepts and the way we choose to codify the world around us, then share it through language, and language requires a common understanding. It does, however, require special software that can extract latent clues from text.
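
To give a sense of what “extracting latent clues from text” means, here’s a deliberately toy sketch: a tiny hand-made word list, far simpler than anything a production detector would actually use:

```python
# Toy example only: a real detector relies on many more signals than a word list.
CLUE_LEXICON = {
    "sad": "Sadness",
    "saddest": "Sadness",
    "hate": "Anger",
    "sorry": "Sadness",
}

def find_clues(sentence: str) -> list[tuple[str, str]]:
    """Return (word, emotion) pairs for any clue words found in the sentence."""
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    return [(word, CLUE_LEXICON[word]) for word in words if word in CLUE_LEXICON]

print(find_clues("People keep telling me life goes on, but that's the saddest part."))
# -> [('saddest', 'Sadness')]
```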

And there are some instances where sentiment scores will match emotion recognition. Many sentiment analysis tools are sophisticated enough to capture most of the polarity in a sentence, which, if both the sentiment analysis and the emotion recognition are accurate, should loosely agree.
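
One informal way to check for that loose agreement is to map detected emotions onto a rough polarity and compare it with the sentiment label. The mapping and the numbers below are a hypothetical sketch, not a formal evaluation method:

```python
# Hypothetical mapping from detected emotions to a rough polarity, used only
# to sanity-check whether the two methods loosely agree on a sentence.
NEGATIVE_EMOTIONS = {"Anger", "Sadness", "Fear"}
POSITIVE_EMOTIONS = {"Humor", "Joy"}

def rough_polarity(emotions: dict[str, float]) -> str:
    neg = sum(score for name, score in emotions.items() if name in NEGATIVE_EMOTIONS)
    pos = sum(score for name, score in emotions.items() if name in POSITIVE_EMOTIONS)
    if abs(neg - pos) < 0.1:
        return "mixed"
    return "negative" if neg > pos else "positive"

sentiment_label = "negative"                                   # from a sentiment tool
emotion_scores = {"Anger": 0.7, "Sadness": 0.6, "Humor": 0.1}  # from emotion recognition

print(rough_polarity(emotion_scores), "vs", sentiment_label)   # -> negative vs negative
```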


“‘I’m sorry’ and ‘I apologize’ mean the same thing… except when you’re at a funeral.”


Some of the leading sentiment analysis tools rated that sentence as “negative,” which would be accurate; one of them rated it as moderately negative (-1, 0.5). They agree. VERN rates it as 97% Humor, 80% Anger, and 66% Sadness, showing that it’s probably a joke that is both angry and sad.

It’s clear that the author was communicating “dark” humor, and admittedly it’s a little risqué. But what a great way to illustrate that there’s more to life than just “positive,” “negative,” “neutral,” or “mixed.”

Emotion recognition can add a lot to your analysis. It can tell you that an emotion is present, which ones, how confident you should be about the detection, and how intense the emotion is. So even when the methodologies arrive at a similar conclusion, they aren’t making the same comparison.
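
In practice, that extra information might be consumed something like this. The field names and the confidence threshold are hypothetical, not VERN’s actual response format:

```python
# Hypothetical emotion-recognition response for a single sentence.
analysis = {
    "emotions": [
        {"label": "Humor",   "confidence": 0.97},
        {"label": "Anger",   "confidence": 0.80},
        {"label": "Sadness", "confidence": 0.66},
    ]
}

# Keep only the emotions we are reasonably confident are present.
THRESHOLD = 0.51
present = [e for e in analysis["emotions"] if e["confidence"] >= THRESHOLD]

for emotion in present:
    print(f'{emotion["label"]}: {emotion["confidence"]:.0%}')
# Humor: 97%
# Anger: 80%
# Sadness: 66%
```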


It’s never an apples-to-apples comparison


It’s more apples-to-oranges. Both are fruit, but that’s where the similarity ends. Emotion recognition can provide you with deeper insight into communications than sentiment analysis can. And oftentimes, sentiment analysis doesn’t agree, or is just plain wrong.

“Yep I’m sad and now I wanna eat all of my feelings thanks.”


Two leading sentiment analysis tools rank that sentence as negative and very positive (1, 0.7), respectively. VERN analyzed the sentence and found 51% Sadness, 51% Anger, and 44% Humor. To you and me, that makes sense: there is definitely a tone of anger and certainly sadness, though neither is a strong declaration.

That’s a clear difference. When you’re talking about helping other human beings, such as in a healthcare setting, getting an objective and salient analysis is critical to assisting with a diagnosis.

Let’s check out another example.

“People keep telling me life goes on, but that’s the saddest part.”


Two leading sentiment analysis tools rank the sentence above as neutral and a moderately strong positive (1, 0.6). There’s a clear disconnect between the sentiment tools and the emotions that are actually present. Because language often carries duplicity and multiplicity in its messaging, it’s common to get results like these. Some other software ranks this sentence as “mixed,” and while that isn’t wrong, it isn’t exactly right either…

VERN ranked that sentence as 51% Sadness, 12.5% Anger, and 12.5% Humor. Again, this is a sad statement, and emotion recognition is likely to pick up on the emotives, the latent clues we use to communicate.

It’s so sad it’s funny


Sentiment analysis tools also have a problem identifying humor. While not a traditional emotion, the phenomenon of humor spans multiple emotions and is an extremely powerful method of communication. It’s one of the few that produces demonstrable physiological responses. It’s also one of the few communication phenomena that is universally practiced.

So how can emotion recognition deal with humor? VERN’s model is different: it treats humor as the detection of a benign incongruity. Sharing that incongruity is the Sender’s attempt at humor, and we’ve created a detection model that measures over 20 signals that are statistically significant indicators of humor.
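
Here’s a rough sketch of what a multi-signal approach looks like. The signal names and weights below are invented for illustration and are not VERN’s actual signals:

```python
# Invented signals and weights, purely to illustrate combining many weak
# indicators of humor into a single confidence score.
signals = {
    "setup_then_twist": 1.0,   # an expectation set up early is violated late
    "benign_violation": 1.0,   # the incongruity is harmless to the audience
    "exaggeration": 0.0,
    "wordplay": 0.0,
}
weights = {
    "setup_then_twist": 0.40,
    "benign_violation": 0.35,
    "exaggeration": 0.15,
    "wordplay": 0.10,
}

humor_confidence = sum(signals[name] * weights[name] for name in signals)
print(f"Humor confidence: {humor_confidence:.0%}")  # -> Humor confidence: 75%
```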

“I hate it when I go to hug someone really sexy and my face smashes right into the mirror.”


VERN scored that as 51% confidence of Humor. One of the leading sentiment analysis tools scored it as very negative (-1, 0.7). We can probably agree that the sentence was an attempt at humor, but I’m not sure we’d all agree that it’s negative. (It’s not something I have a problem with, for example.)

“My dad died when we couldn’t remember his blood type. As he died, he kept insisting for us to “be positive,” but it’s hard without him.”


VERN ranks that at 90% confidence of Humor, 33% Anger, and 51% Sadness. The sentiment analysis tools we compared ranked it as very negative. With an emotion recognition tool like VERN, you can find emotional clues even in humor, something that sentiment analysis can’t do and very few emotion recognition tools do at all.

As we can see, using emotion recognition software like VERN can help you find insights into communication that you might not otherwise find. It’s a look into the emotional state of the sender, something that sentiment analysis struggles with but that emotion recognition products can help with.

If you’d like to get started with VERN, head over and register for a free 30-day, 10,000-query trial, or check out our other options.