AI Can Now Recognize Emotions Through Body Postures

The way people walk says a lot about their mood. Researchers at the University of North Carolina at Chapel Hill and the University of Maryland have taken this a step further and developed a machine learning method that can recognize a person's emotions from their walking style or posture.

According to the researchers, the study is one of the first of its kind and achieved 80.07% accuracy in preliminary experiments.

The coauthors of the report said that emotions play an important role in our daily lives, yet identifying a person's emotions is not always possible, particularly in applications such as games, entertainment, law enforcement, human-computer interaction, shopping, and human-robot interaction.

Four emotions (happy, sad, angry, and neutral) are among the most common moods and are comparatively easier to identify from walking style, so the researchers focused on these specifically.

Different walking styles and postures were gathered from videos, and a 3D pose estimation technique was then used to extract poses and identify affective features.
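As a rough illustration, turning a walking video into a sequence of 3D poses might look like the sketch below. The estimate_3d_pose function is only a placeholder for whatever 3D pose estimator is plugged in; this is not the researchers' actual pipeline.

```python
# Sketch: converting a walking video into a sequence of 3D joint positions.
# `estimate_3d_pose` stands in for a generic 3D pose estimation model.
import cv2
import numpy as np

def estimate_3d_pose(frame):
    """Placeholder: should return an array of shape (num_joints, 3) with the
    estimated 3D coordinates of each body joint for a single frame."""
    raise NotImplementedError("plug in a real 3D pose estimation model")

def extract_pose_sequence(video_path):
    """Read a video and return the per-frame 3D poses as one array."""
    capture = cv2.VideoCapture(video_path)
    poses = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        poses.append(estimate_3d_pose(frame))
    capture.release()
    return np.stack(poses)  # shape: (num_frames, num_joints, 3)
```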

Features were extracted from the pose sequences using a long short-term memory (LSTM) model. These were then combined with a random forest classifier, which gives a mean prediction by analyzing a number of decision trees, to classify examples into the four emotions.
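To make the two-stage idea concrete, here is a minimal sketch assuming PyTorch and scikit-learn. The shapes, hyperparameters, and the untrained encoder are illustrative only, not the paper's implementation.

```python
# Sketch: an LSTM summarizes each pose sequence into a feature vector, and a
# random forest classifies the combined features into the four emotions.
import torch
import torch.nn as nn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class GaitEncoder(nn.Module):
    def __init__(self, num_joints=16, hidden_size=128):
        super().__init__()
        # each time step is the flattened (x, y, z) of every joint
        self.lstm = nn.LSTM(input_size=num_joints * 3,
                            hidden_size=hidden_size,
                            batch_first=True)

    def forward(self, pose_seq):
        # pose_seq: (batch, num_frames, num_joints * 3)
        _, (hidden, _) = self.lstm(pose_seq)
        return hidden[-1]  # (batch, hidden_size) summary of the walk

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def classify(pose_batch, posture_features, labels):
    """Combine deep LSTM features with hand-crafted posture features and
    train a random forest, which averages many decision trees.
    Note: the encoder is untrained here; shown only to illustrate data flow."""
    encoder = GaitEncoder()
    with torch.no_grad():
        deep = encoder(torch.as_tensor(pose_batch, dtype=torch.float32)).numpy()
    combined = np.concatenate([deep, posture_features], axis=1)
    forest = RandomForestClassifier(n_estimators=100)
    forest.fit(combined, labels)
    return forest
```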

Shoulder posture, the distance between consecutive steps, and the distance between the hands and the neck were considered. The angle of head tilt was used to distinguish sad from happy emotions, while body expansion and other overall body postures were noted to separate positive from negative emotions.
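The sketch below shows how posture features of this kind could be computed from 3D joint positions. The joint indices and exact formulas are assumptions for illustration, not the paper's definitions.

```python
# Sketch: hand-crafted posture features from a single 3D pose.
import numpy as np

# assumed joint layout for a pose array of shape (num_joints, 3)
HEAD, NECK, L_SHOULDER, R_SHOULDER, L_HAND, R_HAND = 0, 1, 2, 3, 4, 5

def hand_neck_distance(pose):
    """Average distance between the hands and the neck."""
    return np.mean([np.linalg.norm(pose[L_HAND] - pose[NECK]),
                    np.linalg.norm(pose[R_HAND] - pose[NECK])])

def head_tilt_angle(pose):
    """Angle of the head relative to an assumed vertical axis, in degrees."""
    head_vec = pose[HEAD] - pose[NECK]
    vertical = np.array([0.0, 1.0, 0.0])
    cos = np.dot(head_vec, vertical) / (np.linalg.norm(head_vec) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def body_expansion(pose):
    """A rough proxy for how 'open' the posture is: shoulder width plus
    how far apart the hands are spread."""
    shoulder_width = np.linalg.norm(pose[L_SHOULDER] - pose[R_SHOULDER])
    hand_spread = np.linalg.norm(pose[L_HAND] - pose[R_HAND])
    return shoulder_width + hand_spread
```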


Scientists associate arousal with increased movement, and the same principle is used in the machine learning method. The model analyzed the magnitude of velocity, acceleration, and the movements of the hands, feet, and head. The Emotional Walk (EWalk) dataset consists of 1,384 gaits recorded from videos of 24 subjects walking inside and outside the university campus.
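A corresponding sketch for the movement-based (arousal-related) features is shown below, again with assumed joint indices and frame rate rather than the paper's exact setup.

```python
# Sketch: magnitudes of velocity and acceleration for the hands, feet and
# head across a pose sequence, as simple arousal-related movement features.
import numpy as np

L_HAND, R_HAND, L_FOOT, R_FOOT, HEAD = 4, 5, 10, 11, 0  # assumed indices

def movement_features(pose_seq, fps=30.0):
    """pose_seq: array of shape (num_frames, num_joints, 3).
    Returns mean speed and mean acceleration magnitude of selected joints."""
    joints = pose_seq[:, [L_HAND, R_HAND, L_FOOT, R_FOOT, HEAD], :]
    velocity = np.diff(joints, axis=0) * fps           # per-frame velocity
    acceleration = np.diff(velocity, axis=0) * fps     # per-frame acceleration
    speed = np.linalg.norm(velocity, axis=-1)          # magnitude per joint
    accel_mag = np.linalg.norm(acceleration, axis=-1)
    return np.array([speed.mean(), accel_mag.mean()])
```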

Emotions were labeled by 700 participants on Amazon Mechanical Turk, and these labels were then used to analyze arousal levels.

The research team reported a 13.85% improvement in emotion detection over state-of-the-art algorithms, and a 24.6% improvement over vanilla LSTMs, which do not use the affective features.

This does not mean the method gives 100% accurate results, as it is highly dependent on pose estimation and how well the postures are perceived. The team believes this study will advance future research on emotion recognition algorithms.
"There are some limitations to our approach. The accuracy of our algorithm depends on the accuracy of the 3D human pose estimation and gait extraction algorithms. Therefore, emotion prediction may not be accurate if the estimated 3D human poses or gaits are noisy. Our affective computation requires joint positions from the whole body, but the whole body pose data may not be available in case of occlusions in the video. We assume that the walking motion is natural and does not involve any accessories (e.g., suitcase, mobile phones, etc.). As part of future work, we would like to collect more datasets and address these issues. We will also attempt to extend our methodology to consider more activities such as running, gesturing, etc. Finally, we would like to combine our method with other emotion identification algorithms that use human speech and facial expressions.", explained researchers.

