Moonshine: Distilling Game Content Generators into Steerable Generative Models
Saved in:
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Procedural Content Generation via Machine Learning (PCGML) has enhanced game content creation, yet challenges in controllability and limited training data persist. This study addresses these issues by distilling a constructive PCG algorithm into a controllable PCGML model. We first generate a large amount of content with a constructive algorithm and label it using a Large Language Model (LLM). We use these synthetic labels to condition two PCGML models for content-specific generation: a diffusion model and the Five-Dollar Model. This neural network distillation process ensures that the generation aligns with the original algorithm while introducing controllability through plain text. We define this text-conditioned PCGML as a Text-to-game-Map (T2M) task, offering an alternative to prevalent text-to-image multi-modal tasks. We compare our distilled models with the baseline constructive algorithm. Our analysis of the variety, accuracy, and quality of our generation demonstrates the efficacy of distilling constructive methods into controllable text-conditioned PCGML models.
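
The abstract outlines a three-stage pipeline: generate content with a constructive algorithm, caption it with an LLM, then train a text-conditioned model on the resulting synthetic pairs. The sketch below illustrates only the dataset-building stages; `generate_map` and `label_map` are hypothetical stand-ins (not code or APIs from the paper), and the LLM captioning step is mocked with a trivial heuristic.

```python
# Minimal sketch of the distillation data pipeline described in the abstract.
# All names here are hypothetical placeholders, not the paper's implementation.

import random


def generate_map(width: int = 16, height: int = 16) -> list[list[str]]:
    """Stand-in constructive generator: random floors '.' and walls '#'."""
    return [[random.choice(".#") for _ in range(width)] for _ in range(height)]


def label_map(game_map: list[list[str]]) -> str:
    """Stand-in for the LLM labeling step.

    The paper uses a Large Language Model to produce plain-text labels;
    here a trivial rule-based caption keeps the sketch self-contained.
    """
    wall_ratio = sum(row.count("#") for row in game_map) / (
        len(game_map) * len(game_map[0])
    )
    return "a dense maze-like map" if wall_ratio > 0.5 else "an open map"


# Stages 1-2: build a synthetic (label, map) dataset from the constructive
# algorithm. Stage 3 (not shown) would train a text-conditioned PCGML model,
# e.g. a diffusion model or the Five-Dollar Model, on these pairs (the T2M task).
dataset = [(label_map(m), m) for m in (generate_map() for _ in range(1000))]
print(dataset[0][0])  # e.g. "an open map"
```

Because the labels are produced automatically, the dataset can be made as large as the constructive algorithm allows, which is how the approach sidesteps the limited-training-data problem the abstract mentions.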
DOI: 10.48550/arxiv.2408.09594