Research Article

Emotion recognition based on convolutional gated recurrent units with attention

Article: 2289833 | Received 22 May 2023, Accepted 27 Nov 2023, Published online: 09 Dec 2023

Figures & data

Figure 1. An outline of the proposed AB-DPCGRU framework for EEG-based emotion recognition.

Figure 2. Mapping from the 62 real electrode channel locations to 2D maps.
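
For concreteness, the sketch below shows one way to place per-channel EEG features onto a sparse 2D grid that preserves the scalp layout; the 9x9 grid size and the entries of `coords` are illustrative assumptions rather than the paper's exact mapping.

```python
import numpy as np

# Hypothetical scalp-layout coordinates for a few of the 62 channels;
# the full mapping would list all 62 electrode positions.
GRID_H, GRID_W = 9, 9
coords = {
    "FP1": (0, 3), "FPZ": (0, 4), "FP2": (0, 5),
    "F7":  (2, 0), "FZ":  (2, 4), "F8":  (2, 8),
    # ... remaining electrodes
}

def to_2d_map(values, coords, shape=(GRID_H, GRID_W)):
    """Scatter per-channel scalars (e.g. one band's DE features)
    onto a 2D map; unmapped grid cells stay zero."""
    grid = np.zeros(shape)
    for ch, (r, c) in coords.items():
        grid[r, c] = values[ch]
    return grid

band_features = {ch: np.random.rand() for ch in coords}  # stand-in data
print(to_2d_map(band_features, coords))
```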

Figure 3. Attention module, where U^sSE extracts channel attention and U^cSE extracts spectral attention.
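
A minimal PyTorch sketch in the spirit of concurrent squeeze-and-excitation: one branch produces a per-position weight over the 2D electrode map (channel attention in the EEG sense) and the other a per-feature-map weight (spectral attention). The class names, the reduction ratio, and the additive fusion of the two branches are assumptions for illustration, not the paper's confirmed wiring.

```python
import torch
import torch.nn as nn

class SpectralSE(nn.Module):
    """Squeeze the spatial map, excite each spectral feature map (cSE-style)."""
    def __init__(self, n_maps, reduction=2):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(n_maps, n_maps // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_maps // reduction, n_maps, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class ChannelSE(nn.Module):
    """Squeeze the feature maps, excite each electrode position (sSE-style)."""
    def __init__(self, n_maps):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(n_maps, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class AttentionModule(nn.Module):
    def __init__(self, n_maps):
        super().__init__()
        self.sse = ChannelSE(n_maps)
        self.cse = SpectralSE(n_maps)

    def forward(self, x):
        return self.sse(x) + self.cse(x)

x = torch.randn(1, 5, 9, 9)  # 5 frequency-band maps on a 9x9 electrode grid
print(AttentionModule(5)(x).shape)  # torch.Size([1, 5, 9, 9])
```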

Figure 4. DO-Conv process. The calculation in (a) is called feature composition, and the calculation in (b) is called kernel composition.
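
As a hedged illustration of the two computation orders in DO-Conv, the sketch below folds a depthwise over-parameterising kernel D into the conventional kernel W (kernel composition, panel (b)), yielding a single ordinary convolution kernel for inference; all shapes and names are assumptions based on the general DO-Conv formulation.

```python
import torch
import torch.nn.functional as F

# Assumed shapes: D_mul >= M * N over-parameterises a 3x3 convolution.
C_in, C_out, D_mul, M, N = 4, 8, 9, 3, 3
W = torch.randn(C_out, D_mul, C_in)   # conventional kernel (trainable)
D = torch.randn(M * N, D_mul, C_in)   # depthwise kernel (trainable)

# Kernel composition (b): contract over D_mul to obtain one M x N kernel.
# Feature composition (a) would instead apply D to the unfolded input
# first and W second; both orders give identical outputs because the
# composition is linear, so (b) is preferred at inference time.
W_prime = torch.einsum("kdc,odc->okc", D, W)              # (C_out, M*N, C_in)
W_prime = W_prime.permute(0, 2, 1).reshape(C_out, C_in, M, N)

x = torch.randn(1, C_in, 8, 8)
y = F.conv2d(x, W_prime, padding=1)   # behaves like a plain 3x3 conv
print(y.shape)  # torch.Size([1, 8, 8, 8])
```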

Figure 5. Over-parameterised convolution module, structured for frequency and spatial feature learning.

Figure 6. The layout of the GRU module for temporal feature acquisition.

Figure 7. Internal structure of the GRU, incorporating the update gate and the reset gate.
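
For reference, the standard gating equations of the GRU (Cho et al.'s formulation; the figure's notation and the sign convention on the update gate may differ):

$$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$$
$$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$$
$$\tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)$$
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t$$

where z_t is the update gate, r_t is the reset gate, and ⊙ denotes the element-wise product.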

Figure 8. The protocol used in the emotion experiment.

Table 1. Network parameter selection.

Figure 9. The confusion matrix on the SEED dataset.

Figure 10. Five-fold cross-validation accuracy.
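
A minimal sketch of five-fold cross-validation; a linear classifier on stand-in features takes the place of the AB-DPCGRU network, which would be trained and evaluated per fold in the same way. The feature dimensionality (62 channels x 5 frequency bands) and the splitting scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 310))   # hypothetical per-trial features
y = rng.integers(0, 3, size=200)      # SEED: 3 emotion classes

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print("per-fold accuracy:", scores, "| mean:", scores.mean())
```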

Table 2. Ablation studies of each module.

Table 3. Performance comparison of the models on the SEED dataset.

Table 4. Performance comparison of the models on the SEED-IV dataset.