M$^{2}$UGen: Multi-modal Music Understanding and Generation with the Power of Large Language Models
Format: Article
Language: English
Online access: Order full text
Abstract: Research leveraging large language models (LLMs) is experiencing a surge. Many works harness the powerful reasoning capabilities of these models to comprehend various modalities, such as text, speech, images, and videos. They also utilize LLMs to understand human intention and generate desired outputs like images, videos, and music. However, research that combines both understanding and generation using LLMs is still limited and in its nascent stage. To address this gap, we introduce the Multi-modal Music Understanding and Generation (M$^{2}$UGen) framework, which integrates an LLM's abilities to comprehend and generate music across different modalities. The M$^{2}$UGen framework is purpose-built to unlock creative potential from diverse sources of inspiration, encompassing music, images, and video, through the use of pretrained MERT, ViT, and ViViT models, respectively. To enable music generation, we explore the use of AudioLDM 2 and MusicGen. Bridging multi-modal understanding and music generation is accomplished through the integration of the LLaMA 2 model. Furthermore, we use the MU-LLaMA model to generate extensive datasets that support text/image/video-to-music generation, facilitating the training of our M$^{2}$UGen framework. We conduct a thorough evaluation of the proposed framework. The experimental results demonstrate that our model achieves or surpasses the performance of current state-of-the-art models.
DOI: 10.48550/arxiv.2311.11255
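
The abstract describes an adapter-style pipeline: frozen MERT/ViT/ViViT encoders feed features into LLaMA 2, whose outputs then condition AudioLDM 2 or MusicGen. The PyTorch sketch below only illustrates that general data flow under assumed dimensions; the class names (ModalityAdapter, OutputProjector), layer choices, and the random tensors standing in for the pretrained encoders and the LLM are hypothetical and are not taken from the paper or its code.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: the real M^2UGen uses pretrained MERT/ViT/ViViT
# encoders, LLaMA 2, and AudioLDM 2 / MusicGen. Every component here is a
# stand-in so the data flow can be run end to end on random tensors.

class ModalityAdapter(nn.Module):
    """Projects frozen encoder features into the LLM embedding space."""
    def __init__(self, feat_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
            nn.LayerNorm(llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)

class OutputProjector(nn.Module):
    """Maps LLM hidden states to a conditioning vector for a music decoder."""
    def __init__(self, llm_dim: int, cond_dim: int):
        super().__init__()
        self.proj = nn.Linear(llm_dim, cond_dim)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Pool over the sequence dimension to obtain one conditioning vector.
        return self.proj(hidden.mean(dim=1))

# Hypothetical dimensions; the actual models use their own sizes.
FEAT_DIM, LLM_DIM, COND_DIM, SEQ = 1024, 4096, 512, 16

music_feats = torch.randn(1, SEQ, FEAT_DIM)   # stand-in for MERT features
adapter = ModalityAdapter(FEAT_DIM, LLM_DIM)
projector = OutputProjector(LLM_DIM, COND_DIM)

llm_inputs = adapter(music_feats)             # tokens fed alongside the text prompt
llm_hidden = torch.randn(1, SEQ, LLM_DIM)     # a real system would run LLaMA 2 here
condition = projector(llm_hidden)             # conditioning for MusicGen / AudioLDM 2
print(condition.shape)                        # torch.Size([1, 512])
```

The same adapter/projector pattern can be repeated per modality (music, image, video), which is why the abstract lists one pretrained encoder for each input type.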