DreamCraft: Text-Guided Generation of Functional 3D Environments in Minecraft
Saved in:
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Procedural Content Generation (PCG) algorithms enable the automatic generation of complex and diverse artifacts. However, they do not provide high-level control over the generated content and typically require domain expertise. In contrast, text-to-3D methods allow users to specify desired characteristics in natural language, offering a high degree of flexibility and expressivity. Unlike PCG, however, such approaches cannot guarantee functionality, which is crucial for certain applications such as game design. In this paper, we present a method for generating functional 3D artifacts from free-form text prompts in the open-world game Minecraft. Our method, DreamCraft, trains quantized Neural Radiance Fields (NeRFs) to represent artifacts that, when viewed in-game, match given text descriptions. We find that DreamCraft produces in-game artifacts that are better aligned with the text prompt than those of a baseline that post-processes the output of an unconstrained NeRF. Thanks to the quantized representation of the environment, functional constraints can be integrated using specialized loss terms. We show how this can be leveraged to generate 3D structures that match a target distribution or obey certain adjacency rules over the block types. DreamCraft inherits a high degree of expressivity and controllability from the NeRF while still incorporating functional constraints through domain-specific objectives.
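The abstract's idea of a quantized representation with a distribution-matching loss can be sketched as follows. This is an illustrative assumption, not the paper's implementation: it treats each voxel as holding logits over discrete block types and penalizes the KL divergence between the average soft block distribution and a target distribution, as one plausible form of the "specialized loss terms" mentioned above.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the given axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def block_distribution_loss(logits, target_dist, eps=1e-9):
    """KL divergence between the average soft block distribution over all
    voxels and a target block-type distribution (hypothetical loss form)."""
    probs = softmax(logits)       # (num_voxels, num_block_types)
    avg = probs.mean(axis=0)      # empirical block-type distribution
    return float(np.sum(target_dist * (np.log(target_dist + eps) - np.log(avg + eps))))

# Toy example: 4 voxels, 3 block types (e.g. air, stone, wood).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
target = np.array([0.5, 0.3, 0.2])
loss = block_distribution_loss(logits, target)
```

Because the loss is differentiable in the logits, it could be minimized jointly with a text-alignment objective during training; the actual constraint terms used by DreamCraft may differ.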
DOI: 10.48550/arxiv.2404.15538