An adaptive music generation architecture for games based on the deep learning Transformer model
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: This paper presents an architecture for generating music for video games based on the Transformer deep learning model. Our motivation is to customize the generation according to the taste of the player, who can select a corpus of training examples corresponding to his or her preferred musical style. The system generates various musical layers, following the standard layering strategy currently used by composers of video game music. To adapt the generated music to the gameplay and to the player's situation, we use an arousal-valence model of emotions to control the selection of musical layers. We discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.
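The abstract describes using an arousal-valence model of emotion to control which generated musical layers are active. As a rough illustration of how such a mapping might look, here is a minimal Python sketch; the layer names, thresholds, and selection rules are invented for illustration and do not come from the paper.

```python
"""Illustrative sketch (not from the paper): selecting musical layers
from an arousal-valence reading of the game state. All names,
thresholds, and layer categories here are assumptions."""

from dataclasses import dataclass


@dataclass
class EmotionState:
    """Game-play emotion on the arousal-valence plane, each axis in [-1, 1]."""
    arousal: float   # low = calm, high = intense
    valence: float   # negative = unpleasant, positive = pleasant


# Hypothetical layer catalogue: each generated layer is tagged with the
# region of the arousal-valence plane where it should be audible.
LAYERS = {
    "ambient_pad":    lambda e: e.arousal < 0.3,                    # bed track for calm moments
    "melody_bright":  lambda e: e.valence > 0.2,                    # lead line for positive scenes
    "melody_dark":    lambda e: e.valence <= -0.2,                  # lead line for negative scenes
    "percussion":     lambda e: e.arousal > 0.0,                    # rhythm enters as intensity rises
    "combat_stinger": lambda e: e.arousal > 0.6 and e.valence < 0,  # high-tension layer
}


def active_layers(state: EmotionState) -> list[str]:
    """Return the names of the layers to mix for the current emotion state."""
    return [name for name, rule in LAYERS.items() if rule(state)]


if __name__ == "__main__":
    # Example: the player drifts from calm exploration into a tense encounter.
    print(active_layers(EmotionState(arousal=0.1, valence=0.5)))   # calm, pleasant
    print(active_layers(EmotionState(arousal=0.8, valence=-0.4)))  # intense, negative
```

In this sketch the control signal only gates layers on and off; a real system could instead crossfade layer volumes continuously as the emotion state moves across the plane.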
DOI: 10.48550/arxiv.2207.01698