Music Emotion Prediction using Recurrent Neural Networks
Published as an arXiv preprint, 2024
This project investigates recurrent neural network architectures (RNN, BRNN, LSTM) for predicting the emotion of music clips, with labels defined by the four quadrants of Russell's valence-arousal model of affect.
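As a rough illustration of this model family (not the paper's exact architecture), the sketch below shows a minimal PyTorch LSTM classifier over the four quadrants; the feature dimension, hidden size, and sequence layout are assumptions made for the example.

```python
# Minimal sketch of an LSTM quadrant classifier; sizes are illustrative.
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    def __init__(self, n_features=40, hidden_size=128, n_quadrants=4):
        super().__init__()
        # bidirectional=True would give the BRNN-style variant mentioned above
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_quadrants)

    def forward(self, x):           # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)  # h_n: (num_layers, batch, hidden_size)
        return self.head(h_n[-1])   # logits over the four quadrants

# Example: a batch of 8 clips, each 300 frames of 40-dim features
logits = EmotionLSTM()(torch.randn(8, 300, 40))
```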
We curated a dataset of 900 clips, augmented it to 3,600 samples, and extracted audio features with Librosa.
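The feature pipeline can be sketched with Librosa along these lines; the choice of MFCCs, the input filename, and the three augmentations (pitch shift, time stretch, additive noise) are illustrative assumptions, picked so each clip yields four samples, matching the 900-to-3,600 expansion.

```python
import numpy as np
import librosa

def extract_features(y, sr, n_mfcc=40):
    # Frame-level MFCCs, transposed to (time, n_mfcc) so each row is one frame
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def augment(y, sr):
    # Three illustrative variants per clip; together with the original,
    # this quadruples the dataset (900 clips -> 3,600 samples)
    yield librosa.effects.pitch_shift(y, sr=sr, n_steps=2)  # up 2 semitones
    yield librosa.effects.time_stretch(y, rate=0.9)         # slow down ~10%
    yield y + 0.005 * np.random.randn(len(y))               # light Gaussian noise

y, sr = librosa.load("clip.wav", sr=22050)                  # hypothetical input file
features = [extract_features(v, sr) for v in (y, *augment(y, sr))]
```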
Our experiments show that while simple baseline models can be competitive, RNN-based approaches benefit substantially from data augmentation and larger training sets, improving classification accuracy by up to 30%.
This research highlights the intersection of machine learning, affective computing, and music therapy, with applications in personalized music recommendation systems.
Recommended citation: Chang, X., Zhang, X., **Zhang, H.**, & Ran, Y. (2024). "Music Emotion Prediction using Recurrent Neural Networks." arXiv:2405.06747.
Download Paper | Download BibTeX