
Multimodal Techniques for Emotion Recognition

Agarwal, Devangi
Published by IEEE
2021
Abstract

Human behaviour and actions are greatly affected by emotions. Human-computer interaction (HCI) has made the interpretation of emotions easier. Facial Emotion Recognition (FER), which considers a person's facial features; Speech Emotion Recognition (SER), which concentrates on the texture of human speech; Electroencephalography (EEG), which deals with brain waves; and Electrocardiography (ECG), which focuses on one's heart rate, are a few of the widely used unimodal approaches to recognizing emotions. In this paper we examine how multimodal systems tend to provide more accurate results than existing unimodal systems. Two fusion methods were considered for implementing a multimodal system: Feature Level Fusion and Decision Level Fusion. It was observed that Feature Level Fusion was preferred by most researchers because it yields more valid results when the features are compatible. Facial-Speech, Speech-ECG and Speech-Facial are a few of the popular multimodal combinations that have been implemented by various researchers. Among the combinations studied, Facial-EEG provided the most robust and efficient results.
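The two fusion strategies contrasted in the abstract can be sketched as follows. This is an illustrative example, not the paper's implementation: the feature dimensions, modality names, and the equal-weight averaging rule are all assumptions made for demonstration.

```python
# Sketch of the two fusion strategies for multimodal emotion recognition.
# All shapes and weights here are illustrative assumptions, not from the paper.
import numpy as np

def feature_level_fusion(facial_feats: np.ndarray, speech_feats: np.ndarray) -> np.ndarray:
    """Feature Level Fusion: concatenate per-sample feature vectors from
    each modality into one joint vector, which a single downstream
    classifier would then consume."""
    return np.concatenate([facial_feats, speech_feats], axis=-1)

def decision_level_fusion(facial_probs: np.ndarray, speech_probs: np.ndarray,
                          w: float = 0.5) -> int:
    """Decision Level Fusion: each modality has its own classifier; their
    class-probability outputs are combined (here by weighted averaging)
    and the fused prediction is the highest-scoring class."""
    fused = w * facial_probs + (1.0 - w) * speech_probs
    return int(np.argmax(fused))

# Toy example with two emotion classes and hypothetical classifier outputs.
joint = feature_level_fusion(np.array([1.0, 2.0]), np.array([3.0]))
print(joint.shape)  # (3,) — joint vector holds features from both modalities

facial = np.array([0.2, 0.8])  # hypothetical facial-classifier probabilities
speech = np.array([0.6, 0.4])  # hypothetical speech-classifier probabilities
print(decision_level_fusion(facial, speech))  # 1 — fused scores [0.4, 0.6]
```

Feature-level fusion lets one model learn cross-modal interactions but requires the features to be compatible (as the abstract notes); decision-level fusion keeps the modalities independent, which is simpler when their feature spaces differ.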

About the journal
Journal: 2021 International Conference on Computational Intelligence and Computing Applications (ICCICA)
Publisher: IEEE
Open Access: No