Author: Umut Avci
Date Available: 2025-10-06
Issue Date: 2025
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3578143
URI: http://dx.doi.org/10.1109/ACCESS.2025.3578143
URI: https://gcris.yasar.edu.tr/handle/123456789/6068
Title: A Comprehensive Analysis of Data Augmentation Methods for Speech Emotion Recognition
Type: Article
Language: English
Keywords: Data augmentation; speech emotion recognition; supervised learning

Abstract: The limited availability of labeled emotional speech data remains a significant challenge in the development of robust speech emotion recognition systems. This paper presents a comprehensive investigation of the effectiveness of diverse data augmentation strategies for enhancing emotion recognition performance. Three data augmentation categories were examined: audio-based transformations, image-based modifications, and feature-level synthesis. For audio-based data augmentation, seventeen transformations were used to change the time and frequency content of the raw audio signal. For image-based data augmentation, eight transformations, such as shifting, rotating, and zooming, were applied to the spectrogram images; the SpecAugment method was also used to produce versions of the spectrograms with masked time and frequency axes. In the feature-space-based approaches, new feature vectors were generated using five oversampling algorithms and a generative adversarial network (GAN). Experimental results on the EMO-DB and IEMOCAP datasets demonstrate that the data augmentation approaches enhance emotion classification performance by up to six percent. Empirical evidence indicates that training sets augmented through combinations of audio-based transformations yield the highest performance gains. In contrast, the GAN-based approach fails to improve classification performance.
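As a concrete illustration of the audio-based category, the following is a minimal sketch of two common signal-level transformations (pitch shifting and time stretching) using librosa. The file name sample.wav and the parameter values are hypothetical; the abstract does not specify which seventeen transformations the paper uses.

```python
import librosa

# Hypothetical input file; sr=None keeps the native sampling rate.
y, sr = librosa.load("sample.wav", sr=None)

# Shift the pitch up by two semitones (alters frequency content).
y_pitched = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)

# Speed playback up by 10% without changing pitch (alters time content).
y_stretched = librosa.effects.time_stretch(y, rate=1.1)
```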
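SpecAugment, named in the abstract, masks random bands along the frequency and time axes of a spectrogram. Below is a minimal NumPy sketch of that masking step, assuming a 2-D (frequency, time) array; the mask counts and widths are illustrative defaults, not the paper's settings.

```python
import numpy as np

def spec_augment(spec, num_freq_masks=1, num_time_masks=1,
                 max_freq_width=8, max_time_width=16, rng=None):
    """Mask random frequency and time bands of a (freq, time) spectrogram.

    Masked regions are filled with the spectrogram mean, a common choice.
    """
    rng = np.random.default_rng() if rng is None else rng
    spec = spec.copy()
    n_freq, n_time = spec.shape
    fill = spec.mean()
    for _ in range(num_freq_masks):
        w = int(rng.integers(0, max_freq_width + 1))
        f0 = int(rng.integers(0, max(1, n_freq - w)))
        spec[f0:f0 + w, :] = fill          # horizontal (frequency) band
    for _ in range(num_time_masks):
        w = int(rng.integers(0, max_time_width + 1))
        t0 = int(rng.integers(0, max(1, n_time - w)))
        spec[:, t0:t0 + w] = fill          # vertical (time) band
    return spec
```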
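The abstract mentions five feature-space oversampling algorithms without naming them here; SMOTE is a representative example, and the sketch below assumes it, using imbalanced-learn on a toy imbalanced feature set. SMOTE synthesizes new minority-class vectors by interpolating between a sample and its nearest neighbors in feature space.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Toy data: 40-dim acoustic feature vectors with imbalanced emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = np.array([0] * 150 + [1] * 50)

# fit_resample returns the original data plus synthetic minority samples.
X_aug, y_aug = SMOTE(random_state=0).fit_resample(X, y)
print(X_aug.shape, np.bincount(y_aug))  # classes balanced after resampling
```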