Discover the stupidity of AI emotion recognition with this little browser game – The Verge

Tech companies don't just want to identify you using facial recognition; they also want to read your emotions with the help of AI. For many scientists, though, claims about computers' ability to understand emotion are fundamentally flawed, and a little in-browser web game built by researchers from the University of Cambridge aims to show why.

Head over to emojify.info, and you can see how your emotions are read by your computer via your webcam. The game will challenge you to produce six different emotions (happiness, sadness, fear, surprise, disgust, and anger), which the AI will attempt to identify. However, you'll probably find that the software's readings are far from accurate, often interpreting even exaggerated expressions as neutral. And even when you do produce a smile that convinces your computer that you're happy, you'll know you were faking it.
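For readers curious how this kind of in-browser expression scoring is typically wired up, here is a minimal sketch using the open-source face-api.js library. This is an illustration only: emojify.info's actual implementation is not published in the article, and the model path (`/models`) and polling interval are assumptions.

```ts
// Sketch of webcam-based expression detection in the browser,
// assuming face-api.js with its pre-trained models served from /models.
// This is NOT necessarily how emojify.info is built.
import * as faceapi from 'face-api.js';

async function startExpressionDemo(video: HTMLVideoElement): Promise<void> {
  // Load a lightweight face detector and the pre-trained expression model.
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  // Stream the webcam into the <video> element.
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  // Poll the current frame every 500 ms (interval chosen arbitrarily).
  setInterval(async () => {
    const result = await faceapi
      .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();
    if (!result) return;

    // `expressions` maps labels (happy, sad, angry, fearful, disgusted,
    // surprised, neutral) to confidence scores. Note the model only scores
    // facial movements; it has no access to what you actually feel.
    const [label, score] = Object.entries(result.expressions)
      .sort((a, b) => (b[1] as number) - (a[1] as number))[0];
    console.log(`Top guess: ${label} (${((score as number) * 100).toFixed(0)}%)`);
  }, 500);
}
```

The key point, and the one the game exploits, is visible in the last step: the classifier's output is a distribution over facial configurations, so a deliberately faked smile and a genuinely felt one are indistinguishable to it.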

This is the point of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying much emotion recognition tech (that facial movements are intrinsically linked to changes in feeling) is flawed.

"The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way," Hagerty tells The Verge. "If I smile, I'm happy. If I frown, I'm angry. But the APA did this big review of the evidence in 2019, and they found that people's emotional space cannot be readily inferred from their facial movements." In the game, says Hagerty, you have a chance to move your face rapidly to impersonate six different emotions, "but the point is you didn't inwardly feel six different things, one after the other in a row."

A second mini-game on the site drives home this point by asking users to identify the difference between a wink and a blink, something machines cannot do. "You can close your eyes, and it can be an involuntary action or it's a meaningful gesture," says Hagerty.

Despite these problems, emotion recognition technology is rapidly gaining traction, with companies promising that such systems can be used to vet job candidates (giving them an "employability score"), spot would-be terrorists, or assess whether commercial drivers are drowsy. (Amazon is even deploying similar technology in its own vans.)

Of course, human beings also make mistakes when we read emotions on people's faces, but handing over this job to machines comes with specific disadvantages. For one, machines can't read other social cues the way humans can (as with the wink/blink dichotomy). Machines also often make automated decisions that humans can't question, and they can conduct surveillance at a mass scale without our awareness. Plus, as with facial recognition systems, emotion detection AI is often racially biased, more frequently assessing the faces of Black people as showing negative emotions, for example. All these factors make AI emotion detection much more troubling than humans' ability to read others' feelings.

"The dangers are multiple," says Hagerty. "With human miscommunication, we have many options for correcting that. But once you're automating something, or the reading is done without your knowledge or consent, those options are gone."
