Abstract—Music composition has traditionally been an endeavour requiring significant human expertise, creativity, and an in-depth understanding of music theory. The rapid advancement of deep learning methodologies has opened transformative possibilities for automating creative tasks, including music generation. This paper presents MelodAI, an artificial-intelligence-driven system designed to generate original musical compositions by learning temporal patterns from existing MIDI-based musical datasets. The proposed system employs Long Short-Term Memory (LSTM) networks, a specialized class of Recurrent Neural Networks (RNNs) capable of modelling long-range sequential dependencies, to capture and reproduce harmonic and melodic structures. MelodAI processes encoded MIDI data, trains a multi-layered LSTM model to learn note sequences and chord progressions, and generates novel compositions that are musically coherent and contextually relevant. The system achieves a training accuracy of approximately 94.7% and demonstrates strong qualitative performance in human evaluation studies. Experimental results indicate that the proposed architecture consistently outperforms baseline Hidden Markov Model and basic RNN approaches in generating structured and melodically appealing music. The system is capable of real-time generation and outputs standard MIDI files, making it immediately useful for entertainment, therapy, gaming, and education.

Index Terms—Music Generation, Long Short-Term Memory (LSTM), Deep Learning, MIDI Processing, Recurrent Neural Networks, Sequence Prediction, Music21, Artificial Intelligence.
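The abstract describes a pipeline in which notes and chords extracted from MIDI files are encoded and fed to an LSTM that learns to predict the next note from a fixed-length context window. The paper does not give implementation details, so the sketch below is a minimal, hypothetical illustration of that sequence-preparation step in plain Python; the names `SEQ_LEN` and `prepare_sequences`, and the toy note list, are illustrative assumptions, not taken from MelodAI.

```python
# Hypothetical sketch of the sequence-preparation step implied by the
# abstract: note/chord symbols extracted from MIDI are mapped to integer
# IDs and split into fixed-length windows, each paired with the note that
# immediately follows it (the LSTM's prediction target).
# SEQ_LEN is kept tiny for the demo; real systems often use 50-100.

SEQ_LEN = 4

def prepare_sequences(notes, seq_len=SEQ_LEN):
    """Encode a note list as integers and build (window, next-note) pairs."""
    vocab = sorted(set(notes))                       # one ID per distinct symbol
    note_to_int = {n: i for i, n in enumerate(vocab)}
    encoded = [note_to_int[n] for n in notes]
    inputs, targets = [], []
    for i in range(len(encoded) - seq_len):
        inputs.append(encoded[i:i + seq_len])        # context window
        targets.append(encoded[i + seq_len])         # note to predict
    return inputs, targets, note_to_int

# Toy example: a C-major arpeggio up and back down.
notes = ["C4", "E4", "G4", "C5", "G4", "E4", "C4"]
X, y, mapping = prepare_sequences(notes)
```

Each row of `X` would then be fed to the LSTM (typically one-hot or embedding-encoded), with the matching entry of `y` as the training label; generation runs the trained model autoregressively, appending each predicted note to the window.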
Published in: International Journal of Scientific Research in Engineering and Management (IJSREM)
Volume 10, Issue 04, pp. 1-9
DOI: 10.55041/ijsrem58837