This paper introduces a new approach to emotion classification from electroencephalogram (EEG) signals using deep learning, specifically the Vision Transformer (ViT) model. Our study implemented a dual-feature extraction approach, combining Power Spectral Density and Differential Entropy, to analyse the SEED-IV dataset and classify four distinct emotional states. The ViT model, originally designed for image processing, was successfully adapted to EEG signal analysis, attaining a test accuracy of 99.02% with little variance and outperforming conventional models such as GRUs, LSTMs, and CNNs in this context. Our findings indicate that the ViT model is highly effective at identifying complex patterns in EEG data: precision and recall both exceed 98%, and the F1 score is approximately 98.9%. These results not only demonstrate the efficacy of transformer-based models in analysing cognitive states, but also indicate their considerable potential for improving sympathetic human-computer interaction systems.
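The abstract's dual-feature extraction step can be illustrated with a minimal sketch: per-band Power Spectral Density and Differential Entropy features computed from multichannel EEG segments. The band limits, sampling rate, and windowing below are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of dual-feature extraction (PSD + Differential Entropy) per band.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

FS = 200  # assumed EEG sampling rate after downsampling
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}  # assumed band definitions

def band_features(segment: np.ndarray) -> np.ndarray:
    """Compute PSD and differential-entropy features for one EEG segment.

    segment: array of shape (n_channels, n_samples).
    Returns an array of shape (n_channels, n_bands, 2) holding
    [mean band power, differential entropy] per channel and band.
    """
    n_channels = segment.shape[0]
    feats = np.zeros((n_channels, len(BANDS), 2))
    freqs, psd = welch(segment, fs=FS, nperseg=FS)  # PSD estimate per channel
    for b, (lo, hi) in enumerate(BANDS.values()):
        # PSD feature: mean spectral power inside the band.
        mask = (freqs >= lo) & (freqs < hi)
        feats[:, b, 0] = psd[:, mask].mean(axis=1)
        # DE feature: Gaussian closed form DE = 0.5 * log(2*pi*e*sigma^2)
        # applied to the band-pass-filtered signal.
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band_sig = sosfiltfilt(sos, segment, axis=1)
        var = band_sig.var(axis=1) + 1e-12
        feats[:, b, 1] = 0.5 * np.log(2 * np.pi * np.e * var)
    return feats
```

The resulting per-channel, per-band feature maps could then be arranged as patch-like inputs to a Vision Transformer; the exact tokenisation used in the paper is not specified in this abstract.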
Publication details
2023, ACM International Conference Proceeding Series, Pages 238-246
Enhancing Sentiment Analysis on SEED-IV Dataset with Vision Transformers: A Comparative Study (04b Conference paper in volume)
Tibermacine I. E., Tibermacine A., Guettala W., Napoli C., Russo S.
Research group: Artificial Intelligence and Robotics