Virtual Reality & Intelligent Hardware
Self-attention transfer networks for speech emotion recognition
Haishuai Wang, Nicholas Cummins, Shihuang Sun, Ziping Zhao, Zhongtian Bao, Björn W. Schuller, Jianhua Tao, Zixing Zhang
Affiliations: Department of Biostatistics and Health Informatics, IoPPN, King’s College London, London, UK; GLAM -- Group on Language, Audio & Music, Imperial College London, UK; Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany; College of Computer and Information Engineering, Tianjin Normal University, Tianjin, China; Department of Computer Science and Engineering, Fairfield University, USA
Keywords: Speech emotion recognition; Attention transfer; Self-attention; Temporal convolutional neural networks (TCNs)
DOI:
Source: DOAJ
【 Abstract 】
Background: The automatic detection of emotional states from human speech, a crucial element of human–machine interaction, has long been regarded as a challenging task for machine learning models. One vital challenge in speech emotion recognition (SER) is learning robust and discriminative representations from speech. Moreover, although machine learning methods have been widely applied in SER research, the limited amount of annotated data has become a bottleneck that impedes the extended application of techniques such as deep neural networks. To address this issue, we present a deep learning method that combines knowledge transfer and self-attention for SER tasks. We use the log-Mel spectrogram with deltas and delta-deltas as input. Because emotions are time-dependent, we apply Temporal Convolutional Neural Networks (TCNs) to model their variation over time. We further introduce an attention transfer mechanism based on a self-attention algorithm in order to learn long-term dependencies. The proposed Self-Attention Transfer Network (SATN) takes advantage of attention autoencoders to learn attention from a source task, speech recognition, and then transfers this knowledge to SER. Evaluation on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus demonstrates the effectiveness of the novel model.
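To make the components named above concrete, the sketch below illustrates (it is not the authors' implementation) the input features and the two model ingredients the abstract describes: log-Mel spectrograms with deltas and delta-deltas, dilated temporal convolution blocks as used in TCNs, and a self-attention layer whose attention maps are the kind of quantity an attention-transfer scheme could distil from a source task. The file name, sampling rate, layer sizes, number of emotion classes, and the omission of the attention-autoencoder transfer step itself are illustrative assumptions; librosa and PyTorch are assumed as libraries.

```python
# Minimal sketch of the SER pipeline components described in the abstract.
# Not the paper's implementation; hyperparameters are illustrative.
import librosa
import numpy as np
import torch
import torch.nn as nn


def log_mel_with_deltas(path, sr=16000, n_mels=64):
    """Return a (3, n_mels, frames) array: log-Mel, delta, delta-delta."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    delta = librosa.feature.delta(log_mel)
    delta2 = librosa.feature.delta(log_mel, order=2)
    return np.stack([log_mel, delta, delta2])


class TemporalBlock(nn.Module):
    """One dilated causal 1-D convolution block with a residual connection."""

    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        out = self.conv(x)[..., :-self.pad]      # trim right padding -> causal
        return torch.relu(out) + x               # residual connection


class SelfAttentionSER(nn.Module):
    """Stacked temporal blocks, then self-attention, then a classifier head."""

    def __init__(self, in_dim, channels=128, n_classes=4):
        super().__init__()
        self.proj = nn.Conv1d(in_dim, channels, 1)
        self.tcn = nn.Sequential(*[TemporalBlock(channels, dilation=2 ** i)
                                   for i in range(3)])
        self.attn = nn.MultiheadAttention(channels, num_heads=4,
                                          batch_first=True)
        self.out = nn.Linear(channels, n_classes)

    def forward(self, x):                        # x: (batch, in_dim, time)
        h = self.tcn(self.proj(x)).transpose(1, 2)     # (batch, time, channels)
        h, attn_weights = self.attn(h, h, h)     # attention maps are what an
        return self.out(h.mean(dim=1)), attn_weights  # attention-transfer step could match


if __name__ == "__main__":
    feats = log_mel_with_deltas("utterance.wav")            # hypothetical file
    x = torch.from_numpy(feats).float().flatten(0, 1)[None]  # (1, 3*64, frames)
    logits, attn = SelfAttentionSER(in_dim=192)(x)
    print(logits.shape, attn.shape)
```

In an attention-transfer setting such as the one the abstract outlines, the returned attention maps from a model trained on the source task (speech recognition) would provide the supervision signal for the SER model's attention; the sketch only exposes where those maps would come from.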
【 License 】
Unknown