A Study on the Classification Performance of Machine Learning-Based Zentner Music Emotion Model 


Vol. 15,  No. 1, pp. 46-53, Jan.  2026
10.3745/TKIPS.2026.15.1.46


  Abstract

This study explores music emotion classification using machine learning. The emotional categories are based on Zentner's music emotion model, and labels from this model were assigned to the audio data of each genre in the GTZAN dataset. For classification, audio features such as MFCC, ZCR, Chroma, Spectral Centroid, and Harmony were extracted, and machine learning models including KNN, Decision Tree, Random Forest, XGBoost, LightGBM, and SVM were compared. In addition, the effectiveness of Recursive Feature Elimination (RFE) and stacking was tested, and stacking yielded clear improvements in classification performance. Among the individual models, LightGBM achieved the highest classification performance, and applying stacking improved performance by 14% on average over the individual models.
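The pipeline described above (extracted audio features fed to base classifiers, then combined via stacking) can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic features stand in for the MFCC/ZCR/Chroma/Spectral Centroid/Harmony vectors, the RFE estimator and the set of base learners are assumptions, and scikit-learn's `StackingClassifier` is used in place of whatever stacking setup the paper employed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the extracted audio features; in the paper these
# would be MFCC, ZCR, Chroma, Spectral Centroid, and Harmony vectors, with
# one of Zentner's emotion categories as the label.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Recursive Feature Elimination: iteratively drop the weakest features,
# as ranked by an estimator (Random Forest here, by assumption).
selector = RFE(estimator=RandomForestClassifier(random_state=0),
               n_features_to_select=10).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Stacking: base learners' predictions become inputs to a meta-learner.
base_learners = [
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train_sel, y_train)
acc = stack.score(X_test_sel, y_test)
```

In practice the features would come from an audio library such as librosa, and the base-learner pool would include the gradient-boosting models (XGBoost, LightGBM) that the study found strongest.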



  Cite this article

[IEEE Style]

M. Chung and N. Moon, "A Study on the Classification Performance of Machine Learning-Based Zentner Music Emotion Model," The Transactions of the Korea Information Processing Society, vol. 15, no. 1, pp. 46-53, 2026. DOI: 10.3745/TKIPS.2026.15.1.46.

[ACM Style]

Chung Moonsik and Moon Nammee. 2026. A Study on the Classification Performance of Machine Learning-Based Zentner Music Emotion Model. The Transactions of the Korea Information Processing Society, 15, 1, (2026), 46-53. DOI: 10.3745/TKIPS.2026.15.1.46.