Multimodal Emotion Recognition with Transfer Learning of Deep Neural Network

Release Date: 2022-12-30    Authors: HUANG Jian, LI Ya, TAO Jianhua, and YI Jiangyan

[Abstract] Due to the lack of large-scale emotion databases, it is hard to obtain in multimodal emotion recognition with deep neural networks the improvements that deep learning has achieved in other areas. We use transfer learning to improve performance with models pre-trained on large-scale data. Audio is encoded using deep speech recognition networks trained on 500 hours of speech, and video is encoded using convolutional neural networks trained on over 110,000 images. The extracted audio and visual features are fed into Long Short-Term Memory (LSTM) networks to train a separate model for each modality. Logistic regression and an ensemble method are used for decision-level fusion. The experimental results indicate that 1) audio features extracted from deep speech recognition networks achieve better performance than handcrafted audio features; 2) visual emotion recognition obtains better performance than audio emotion recognition; 3) the ensemble method outperforms logistic regression, and prior knowledge from the micro-F1 value further improves performance and robustness, achieving accuracies of 67.00% for "happy", 54.90% for "angry", and 51.69% for "sad".
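The abstract describes decision-level (late) fusion of modality-specific predictions using logistic regression and an ensemble method. The sketch below illustrates that general idea only; it is not the authors' implementation, and the synthetic data, array names, and class count are assumptions made for illustration.

```python
# Minimal sketch of decision-level (late) fusion, assuming each modality's
# LSTM classifier already outputs class posteriors for the emotion categories.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 3  # illustrative sizes, e.g. happy / angry / sad

# Hypothetical per-sample posteriors from the audio and visual LSTM models.
audio_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
visual_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
labels = rng.integers(0, n_classes, size=n_samples)

# Logistic-regression fusion: concatenate the two decision vectors and
# train a classifier on top of them (stacking).
fusion_features = np.hstack([audio_probs, visual_probs])
fusion_clf = LogisticRegression(max_iter=1000).fit(fusion_features, labels)

# A simple ensemble alternative: average the modality posteriors and
# take the arg-max class for each sample.
ensemble_pred = np.argmax((audio_probs + visual_probs) / 2, axis=1)
```

In practice the fused inputs would be the posteriors produced by the trained audio and visual LSTMs on held-out data rather than random vectors.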

[Keywords] deep neural network; ensemble method; multimodal emotion recognition; transfer learning
