Frontiers in Neuroscience
A novel feature fusion network for multimodal emotion recognition from EEG and eye movement signals
Neuroscience
Baole Fu1  Chunrui Gu1  Yuxiao Xia1  Ming Fu1  Yinhua Liu2
[1] School of Automation, Qingdao University, Qingdao, China; Institute for Future, Qingdao University, Qingdao, China; Shandong Key Laboratory of Industrial Control Technology, Qingdao, China
Keywords: multimodal emotion recognition; electroencephalogram (EEG); eye movement; feature fusion; multi-scale; Convolutional Neural Networks (CNN)
DOI: 10.3389/fnins.2023.1234162
Received: 2023-06-03; Accepted: 2023-07-20; Published: 2023
Source: Frontiers
【 Abstract 】
Emotion recognition is a challenging task, and multimodal fusion methods for emotion recognition have become a trend. Fusion vectors can provide a more comprehensive representation of changes in the subject's emotional state, leading to more accurate emotion recognition results. Different fusion inputs and feature fusion methods have varying effects on the final fusion outcome. In this paper, we propose a novel Multimodal Feature Fusion Neural Network model (MFFNN) that effectively extracts complementary information from eye movement signals and performs feature fusion with EEG signals. We construct a dual-branch feature extraction module to extract features from both modalities while ensuring temporal alignment. A multi-scale feature fusion module is introduced, which uses cross-channel soft attention to adaptively select information from different spatial scales, enabling features at different spatial scales to be acquired and fused effectively. We conduct experiments on the publicly available SEED-IV dataset, and our model achieves an accuracy of 87.32% in recognizing four emotions (happiness, sadness, fear, and neutrality). The results demonstrate that the proposed model can better exploit complementary information from EEG and eye movement signals, thereby improving the accuracy and stability of emotion recognition.
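The cross-channel soft attention described above, which adaptively weights branches at different spatial scales before fusion, resembles selective-kernel-style fusion. The following is a minimal NumPy sketch under that assumption; the function names, shapes, and the random projection matrices standing in for learned fully connected layers are all illustrative, not the paper's actual MFFNN implementation.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_fusion(branches, proj_weights):
    """Fuse per-branch feature maps with cross-channel soft attention.

    branches     : list of S arrays, each [C, T] (channels x time steps),
                   e.g. EEG and eye-movement branches at different scales.
    proj_weights : list of S [C, C] matrices, stand-ins (hypothetical) for
                   the learned per-branch fully connected layers.
    Returns the fused [C, T] map and the [S, C] attention weights.
    """
    stacked = np.stack(branches)                # [S, C, T]
    summed = stacked.sum(axis=0)                # element-wise sum of branches
    descriptor = summed.mean(axis=-1)           # global average pooling -> [C]
    logits = np.stack([W @ descriptor for W in proj_weights])  # [S, C]
    weights = softmax(logits, axis=0)           # soft attention across branches
    fused = (weights[..., None] * stacked).sum(axis=0)         # [C, T]
    return fused, weights
```

Because the softmax is taken over the branch axis, the per-channel weights of all branches sum to one, so each output channel is a convex combination of the corresponding channels across scales.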
【 License 】
Unknown
Copyright © 2023 Fu, Gu, Fu, Xia and Liu.
【 Preview 】
Files | Size | Format | View
---|---|---|---
RO202310127874481ZK.pdf | 1971KB | | download