It’s a riot: the stressful AI simulation built to understand your emotions – The Guardian (blog)

Posted: March 29, 2017 at 11:23 am

A protester hurls a tear gas canister fired by police in Ferguson, Missouri, on 13 August 2014. Photograph: AP

An immersive film project is attempting to understand how people react in stressful situations, using artificial intelligence (AI), film and gaming technologies to place participants inside a simulated riot and detect their emotions in real time.

Called Riot, the project is the result of a collaboration between award-winning multidisciplinary immersive filmmaker Karen Palmer and Professor Hongying Meng of Brunel University. The two have worked together previously on Syncself2, a dynamic interactive video installation.

Riot was inspired by global unrest, and specifically by Palmer’s experience of watching live footage of the Ferguson protests in 2015. “I felt a big sense of frustration, anger and helplessness. I needed to create a piece of work that would encourage dialogue around these types of social issues. Riots all over the world now seem to be [the] last form of [community] expression,” she said.

Whereas Syncself2 used an EEG headset to place the user in the action, with Riot Palmer wanted to achieve a more seamless interface. “Hongying and I discussed AI and facial recognition; the tech came from creating an experience which simulated a riot. It needed to be as though you were there.”

Designed as an immersive social digital experience, Riot sets participants a single objective: get through a simulated riot alive. This is achieved by interacting with a variety of characters who can help you reach home. The video narrative is controlled by the emotional state of the user, which is monitored by AI software in real time.

“Machine learning is the key technology for emotion detection systems. From the dataset collected from audiences, AI methods are used to learn from the data and build the computational model, which can be integrated into the interactive film system to detect emotions in real time,” explained Meng.
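
To illustrate the pipeline Meng describes, here is a minimal sketch of real-time emotion detection from a webcam feed. It is not the Brunel team’s code: it assumes OpenCV for face detection and a hypothetical pre-trained classifier, emotion_model.joblib, that maps 48x48-pixel grayscale face crops to Riot’s three labels.

```python
# A minimal sketch, not the Riot/Brunel implementation. Assumes OpenCV and a
# hypothetical scikit-learn-style classifier saved as "emotion_model.joblib"
# that predicts an index into EMOTIONS from a flattened 48x48 face crop.
import cv2
import joblib

EMOTIONS = ["fear", "anger", "calm"]  # the three states Riot's pilot recognises

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = joblib.load("emotion_model.joblib")  # hypothetical pre-trained model

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        features = (crop.flatten() / 255.0).reshape(1, -1)  # normalise pixels
        label = EMOTIONS[int(model.predict(features)[0])]
        print("detected:", label)  # a real system would feed this to the film engine
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```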

The programme in development at Brunel can read seven emotions, but not all are appropriate for the experience created by the Riot team. Currently, Riot’s pilot interface can recognise three emotional states: fear, anger and calm.
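
The article doesn’t say how Riot maps those three states onto the film, but narrative branching driven by a detected emotion can be sketched as a simple lookup. The scene names and clip files below are hypothetical, purely to show the mechanism.

```python
# Hypothetical branching logic: scene ids, clips and branches are invented
# for illustration; they are not from the Riot project.
from dataclasses import dataclass

@dataclass
class Scene:
    clip: str       # video file the engine would play
    branches: dict  # detected emotion -> next scene id

SCENES = {
    "police_line": Scene("police_line.mp4", {
        "calm": "officer_waves_you_through",
        "fear": "you_freeze_as_crowd_surges",
        "anger": "officer_escalates",
    }),
    # ...further scenes would be defined the same way
}

def next_scene(current_id: str, detected_emotion: str) -> str:
    """Pick the next scene; fall back to the calm branch if unmapped."""
    branches = SCENES[current_id].branches
    return branches.get(detected_emotion, branches["calm"])

print(next_scene("police_line", "fear"))  # -> you_freeze_as_crowd_surges
```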

I tried it along with Dr Erinma Ochu, a lecturer in science communication and future media at the University of Salford, whose PhD was in applied neuroscience.

Riot is played out on a large screen, with 3D audio surrounding us as a camera watches our facial expressions and computes in real time how we are reacting. Based on this feedback, the algorithm determines how the story unfolds.

We see looters, anarchists and police playing their parts and interacting directly with us. What happens next is up to us: our reactions and responses determine the story, and as the screen is not enclosed in a headset, but open for others to see, it also creates a public narrative.

Ochu reacted with jumps and gasps to what was happening around her and ultimately didn’t make it home. “It’s interesting to try something you wouldn’t do in real life so you can explore a part of your character that you might suppress if you were going to get arrested,” she said.

As a scientist and storyteller, she felt Riot was ahead of the curve: “This has leapfrogged virtual reality,” she said.

According to the Riot team, virtual reality (VR) developers have struggled to create satisfying stories in an environment in which, unlike film, you can’t control where the user looks or what route they take through the narrative.

To overcome these issues and create a coherent, convincing storyline, the team from Brunel re-trained their facial recognition software to work for Riot. “[This] provides a perfect platform to show our research and development. Art makes our work easier to understand. We have been doing research in emotion detection from facial expression, voice, body gesture, EEG, etc for many years,” said Meng. He hopes the project’s success will make people see the benefits of AI, leading to the development of smart homes, buildings and cities.

For now, the emotion detection tool being worked on at Brunel can be used in clinical settings to measure pain and emotional states such as depression in patients. Similar tech has already been used in a therapeutic setting: a study last year at the University of Oxford used VR to help people with persecutory delusions. Those who trialled real-life scenarios combined with cognitive therapy saw significant improvement in their symptoms.

But can Riot’s current AI facial recognition tech work for everyone? People with Parkinson’s, or with sight or hearing issues, might need an EEG headset and other physical monitors to gain the same immersive experience, unless tech development rapidly catches up with Palmer’s ultimate vision of a 360-degree screen, which would also allow a group of participants to play together.

Perhaps Riot and its tech could herald a new empathetic, responsible and responsive future for storytelling and gaming, in which the viewer or player is encouraged to bring about change both in the narrative and in themselves. After all, if you could truly see a story from another person’s point of view, what might you learn about them and yourself? How might you carry those insights into the real world to make a difference?

The V&A will exhibit Riot as part of the Digital Design Weekend in September 2017. The project is currently shortlisted for the Sundance New Frontier Storytelling Lab.
