Flexible Control in Symbolic Music Generation via Musical Metadata
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: In this work, we introduce a demonstration of symbolic music generation,
focusing on providing short musical motifs that serve as the central theme of
the narrative. For generation, we adopt an autoregressive model that takes
musical metadata as input and generates 4 bars of multitrack MIDI sequences.
During training, we randomly drop tokens from the musical metadata to guarantee
flexible control. This gives users the freedom to select which input types to
provide while the model maintains generative performance, enabling greater
flexibility in music composition. We validate the effectiveness of this
strategy through experiments on model capacity, musical fidelity, diversity,
and controllability. Additionally, we scale up the model and compare it with
another music generation model through a subjective test. Our results indicate
its superiority in both control and music quality. A demonstration video is
available at https://www.youtube.com/watch?v=-0drPrFJdMQ.
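
The metadata-dropout strategy described in the abstract can be illustrated with a minimal sketch. The field names, the `<unk>` placeholder token, and the `drop_prob` value below are assumptions for illustration, not the paper's actual implementation: the idea is simply that randomly masking condition tokens during training teaches the model to generate from any subset of metadata at inference time.

```python
import random

# Hypothetical metadata fields; the exact token vocabulary used by the
# paper is not specified here, so these names are illustrative only.
UNKNOWN = "<unk>"  # placeholder token standing in for a dropped field

def drop_metadata_tokens(metadata: dict, drop_prob: float = 0.3) -> dict:
    """Randomly replace metadata tokens with a placeholder during training.

    Training on partially masked conditions means the model never relies
    on any single field being present, so at inference time a user may
    supply only the metadata they care about.
    """
    return {
        field: (UNKNOWN if random.random() < drop_prob else value)
        for field, value in metadata.items()
    }

# Example: one training step sees a partially masked condition set.
metadata = {"genre": "jazz", "tempo": "120", "key": "Cmaj", "track": "piano"}
print(drop_metadata_tokens(metadata))
# e.g. {'genre': 'jazz', 'tempo': '<unk>', 'key': 'Cmaj', 'track': 'piano'}
```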
DOI: 10.48550/arxiv.2409.07467