“Can Machines Perceive Emotion?” Welp, half the room has staked their careers on it and the other half says “probably not, definitely not yet,” so looking forward to a 🌶spicy🌶 keynote by @LFeldmanBarrett #acii2019

You best believe ima live tweet this bad Larry
👇🏻
Industry is falling over itself trying to read emotions from facial & body signals. Many companies already make this claim! Assumption: emotions can be objectively read from body signals.
“Any claims that we are reading emotion successfully are, at this point, exaggerating what we can do. Any companies we read about are misrepresenting what we can do.... affective computing may be going about this in the wrong way by misunderstanding what emotions are.”
Detecting happiness is not the same as detecting smiles. The assumption is that smiles == happiness.

This is the biggest thing for me. As @saund_katie says: laughter isn’t joy, it’s an excess of any emotion.
People *rarely* perform the “canonical” facial (AU)/bodily response for an emotion when reporting genuinely experiencing that emotion.

Ex. People only scowl 30% of the time when they’re angry. 70% of the time they’re doing something else when they’re angry.
But people (and computers) *recognize* the canonical expression — hence the disconnect between genuine emotion recognition and surface-level behavior recognition.
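
Sidebar: that 30% number does a lot of work here, so here’s a back-of-envelope Bayes check. (The 30% is from the talk; every other number below is my made-up assumption.)

```python
# Toy Bayes check (the 30% is from the talk; all other numbers are
# invented assumptions): how much does a scowl actually tell you?
p_angry = 0.10                  # assumed base rate of being angry right now
p_scowl_given_angry = 0.30      # talk: people scowl ~30% of the time when angry
p_scowl_given_not_angry = 0.10  # assumed: concentration, bright sun, etc.

# P(scowl) by total probability, then Bayes' rule.
p_scowl = (p_scowl_given_angry * p_angry
           + p_scowl_given_not_angry * (1 - p_angry))
p_angry_given_scowl = p_scowl_given_angry * p_angry / p_scowl

print(f"P(angry | scowl) = {p_angry_given_scowl:.0%}")  # 25%: most scowls aren't anger
```

Even with a cue that’s 3x more likely during anger, three out of four scowls come from something else under these (made-up) priors.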
Asking people to choose among a fixed set of labels gets pretty high “accuracy”... but free labeling makes accuracy plummet, often to no better than chance.

Adding cultural diversity only makes matters worse.

TLDR: “evidence for universal emotional expressions vanishes.”
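
Sidebar: a toy simulation (every number here is invented) of why the response format alone can inflate a “recognition accuracy” headline:

```python
# Toy simulation: forced choice among 6 labels vs. free labeling over a
# larger vocabulary, with the SAME weak cue in both conditions.
import random

random.seed(0)

CANONICAL = ["anger", "fear", "joy", "sadness", "disgust", "surprise"]
# A larger, open-ended vocabulary a free-labeling participant might use.
FREE_VOCAB = CANONICAL + ["frustration", "awe", "boredom", "shame",
                          "confusion", "amusement", "contempt", "anxiety"]

def respond(true_label, options):
    # Weak cue: the target is 3x more likely than any single alternative.
    weights = [3 if lab == true_label else 1 for lab in options]
    return random.choices(options, weights=weights)[0]

def accuracy(options, n=10_000):
    hits = 0
    for _ in range(n):
        target = random.choice(CANONICAL)
        hits += respond(target, options) == target
    return hits / n

print(f"forced choice (6 options): {accuracy(CANONICAL):.0%}")   # ~38%, chance 17%
print(f"free labeling (open-ish):  {accuracy(FREE_VOCAB):.0%}")  # ~19%, chance  7%
```

Same underlying signal in both conditions; only the response format changed, and the headline number roughly halved. Now imagine the real open-vocabulary case with hundreds of possible words.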
Sidebar: the sheer amount of data gives LFB a pretty friggin airtight case on this. As a young researcher, I see very few people still subscribing to universal facial expressions or universal emotional behaviors. At a minimum, people get cultural differences.
WHAT A STUDY: they made up words for emotional experiences from other cultures and asked subjects to match sounds to stories, and HERE ARE THE CATEGORIES. I love it
“Scientists have been using a method which does not collect evidence, it manufactures evidence”

🔥🔥🔥🔥🔥🔥🔥🔥
“These stats may be sufficient for a publication, but not sufficient that anybody in this room would want their outcomes determined by them.”

“We often mistake statistical significance for actual significance in real life.”

GIRL PREACH.
“Variation is the norm.”

This is what pleasure looks like:
People move their faces in different ways in the same emotion category, and in the same ways in different emotion categories.

It’s not that there’s NO correlation, it’s that FACE IS NOT ENOUGH.

face is not universal truth.
Furthermore, our canonical “faces” are Western stereotypes.

“Hundreds of papers are getting published in our very best journals based on stereotypes. Basically, it’s a study of emojis.”

🤯🤬😢
In the real world, emotions are not entities that have single bio-markers.

Emotions have vastly diverse neural & behavioral signals.

Punchline: “variety is the norm.”

Functional dynamics, lesion studies, animal studies, etc.: emotion categories are always degenerate.
Many emotion theories permit variability, but are they bolting it on after the fact? Yeah, kinda. It’s normally variability *around a single biomarker.*

Sidebar: but isn’t that just good science? Revising your theory when you get surprising data?

Here’s a pretty graph:
To be clear: the variability in expression is NOT random... it’s just not strictly predictive of a single emotion.
She made a pun about “CATagories” brb being deeply proud of my dad joke prowess.
The prototypes of categories people call to mind are highly context-dependent. E.g. “cats.”
People create functional categories. By consensus, we agree on categories. By agreeing, we socially construct categories.

(Like things you can’t bring on airplanes)
EMOTION CATEGORIES ARE THESE CATEGORIES. These emotion categories are socially imposed. Learning them is not a classification problem but a construction problem.

We impose meaning on certain signals because we’ve learned to. This is how we go scowl → mad.
For a machine to perceive emotions in humans, it has to construct these categories on the fly.

It’s a problem of understanding the category a behavior belongs to in a given instance in a given environment.
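
Sidebar: if you squint, the shift she’s describing is from f(behavior) → emotion to f(behavior, context) → category-constructed-in-the-instance. A toy sketch (all labels and contexts invented by me):

```python
# Minimal sketch (invented labels/contexts): emotion inference takes
# (behavior, context) as input, never behavior alone.
from typing import NamedTuple

class Instance(NamedTuple):
    behavior: str   # e.g. an action-unit pattern, "tears"
    context: str    # situation, culture, who else is there, ...

def infer_category(x: Instance) -> str:
    # The same behavior lands in different categories in different instances.
    if x.behavior == "tears":
        return {"wedding": "joy",
                "funeral": "grief",
                "chopping onions": "no emotion"}.get(x.context, "unknown")
    return "unknown"

print(infer_category(Instance("tears", "wedding")))          # joy
print(infer_category(Instance("tears", "funeral")))          # grief
print(infer_category(Instance("tears", "chopping onions")))  # no emotion
```

A lookup table is obviously a cartoon; her point is that the real version has to *construct* the category on the fly rather than pick from a fixed label set.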
Signals only have meanings because we impose meaning on them.

Emotions are not built in to your brain and body, but built BY your brain and body, dynamically, as you need them.

It feels like we’re reading, when in fact we’re constructing.
Previous life knowledge helps create categories; without it, people may be in a state of “experiential blindness.”
Thought experiment:
Your brain is stuck in a box, figuring out the outside world only through the senses. Sensory changes are only effects of the world, and your brain just guesses at the causes.

Classic reverse inference, amirite?

Your brain remembers experiences that are similar to the present one.
Hypothesis: brain constructing ad hoc categories as potential explanations for incoming sense data.

These categories are potential explanations for all these wacky effects we feel.
Brains are really good at generating new ideas from bits and pieces of past knowledge.

Sidebar: Code Names is a great board game that is a real-life demo of this. @LFeldmanBarrett have you played?? Want to play after this talk?? It’s a cognitive philosopher’s dream.
Your brain is basically mostly talking to itself. Predicting what’s going to happen next (by constructing ad hoc categories) such that your brain begins to construct what it expects to experience BEFORE sensory inputs arrive.

Like being nervous before a talk? 😇
Learning is just adjusting your predictive model of the environment to be *really good.*

This works for emotions too.

“Emotions that seem to happen TO you are actually made BY you.”

#zen
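
Sidebar: “adjusting your predictive model” has a dead-simple toy version: predict, observe, nudge the prediction by a fraction of the error. (Everything below is a stand-in I made up, not her model.)

```python
# Toy predictive loop: the "brain" keeps a running prediction of a sensory
# signal and updates it by a fraction of the prediction error, so the
# prediction is ready BEFORE each new input arrives.
def learn(signal, lr=0.2, prediction=0.0):
    for observed in signal:
        error = observed - prediction   # surprise: what arrived vs. expected
        prediction += lr * error        # adjust the model, not the world
        yield prediction

stream = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]  # the "environment" changes midway
for step, pred in enumerate(learn(stream), start=1):
    print(f"step {step}: prediction = {pred:.2f}")
```

The predictions lag, then catch up: exactly the “nervous before a talk” flavor of anticipating inputs that haven’t arrived yet.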
SO!!!!! Emotions you detect in other people are partially constructed in your own head!!!! This is how single physical features come to represent single emotions.

BUT physical signals are inherently ambiguous without context!
In order to detect emotions, we have to figure out how machines can construct these emotions with context.

Context includes your own AND others’ bodies!!
Can machines *experience* emotion?

Who knows, she’s out of time ⌛️

Hands shooting up for Q&A, I love this.
Q: Validity is a problem, but IF we could read emotions, what are the ethics?

A: depends on context. Maybe helpful, maybe prohibitive. “I’ve never considered validity/ethics separately. Given probabilistic accuracy, I don’t believe machines will ever outstrip humans.”
Basically, even humans are just making assumptions about each other’s emotions given huge context. We just have shared consensus, and even that is just probabilistic.

Given this: algorithms can be wrong. We don’t want algorithms that can be wrong to be determining outcomes.
“I don’t think it’s likely that emotional agents are going to be a bigger problem than we already have [with humans].”

However, given how we regulate each other’s nervous systems, agents *could* be helpful to people by implementing biologically and emotionally supportive mechanisms.
Q: if you were to make assumptions/constraints to recognize emotions, what would they be?

A: “I want to answer a different question.” The amount of money we spend on research like this is NOTHING compared to other sciences. If we had huge funding, we could have solved this.
“I would like to collect MUCH more high dimensional data. SIGNIFICANTLY more.” Across lots of time, across tons of dimensions.

People are into data about themselves, maybe we don’t need to make assumptions, we can just actually get the friggin data we need to solve this.
Huge huge huge experiments are totally doable, just like $50mil for a pilot. NOTHING compared to big companies or other sciences (like physics.)

“We’re laboring under huge resource constraints.”
Q: we want to collect lots of data, how could we encourage people to share this data?

A: it’s hard. It’s cumbersome. It’s costly. You need to engage people as participant scientists. Don’t convince, be authentically inclusive. Be respectful to collaborative subjects.
Give people their own data (people love themselves). Explain the problem. Make people understand how they’re making a contribution!!
Q by @RosalindPicard: physicists are self-congratulatory, we criticize each other.

A (discussion): I wanted all senior people to pool resources. Only Adolphs said yes 🙄
(Same Q by Roz): tearing down Ekman is a straw man. We’ve recognized the complexity. Need to use context (including sensationalized media) when giving criticism.

A: “I’ve never taken up anybody’s invitation to criticize Ekman personally.”
“I’m amused by the point I’m tearing down a straw man. There are papers in YOUR community that say they detect emotion when they’re detecting behavior.” EEEEK.

Our own assumptions percolate into all studies.
AAAAAND WE’RE OOT. What a ride. 10/10, highly recommend reading How Emotions Are Made, listening to @LFeldmanBarrett and @kaliouby’s podcast interview (I’ll find it later), and hanging around #acii2019.

Stay tuned for more hot emotional content that machines can’t hope to understand 🙃