RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models
Saved in:
Main Authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Text-to-texture generation has recently attracted increasing attention, but existing methods often suffer from view inconsistencies, apparent seams, and misalignment between textures and the underlying mesh. In this paper, we propose a robust text-to-texture method for generating consistent and seamless textures that are well aligned with the mesh. Our method leverages state-of-the-art 2D diffusion models, including SDXL and multiple ControlNets, to capture structural features and intricate details in the generated textures. It also employs a symmetrical view synthesis strategy combined with regional prompts to enhance view consistency. Additionally, it introduces novel texture blending and soft-inpainting techniques, which significantly reduce seam regions. Extensive experiments demonstrate that our method outperforms existing state-of-the-art methods.
DOI: 10.48550/arxiv.2409.19989
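The abstract names SDXL combined with multiple ControlNets as the backbone for capturing structural features of the mesh. The following is a minimal, hypothetical sketch of that building block using the Hugging Face diffusers library; the specific ControlNet checkpoints, file paths, prompt, and conditioning scales are illustrative assumptions, not the paper's released implementation.

```python
# Sketch: one texture view generated with SDXL + two ControlNets (depth + canny)
# via diffusers. Checkpoints and input paths below are placeholders, not the
# authors' code or assets.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Two ControlNets so the generated view follows the mesh geometry:
# a depth-conditioned model plus an edge-conditioned model.
controlnets = [
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    ),
]

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnets,
    torch_dtype=torch.float16,
).to("cuda")

# Conditioning images rendered from the mesh for a single viewpoint
# (placeholder paths; in practice these come from a depth/edge render pass).
depth_map = load_image("renders/front_depth.png")
canny_edges = load_image("renders/front_canny.png")

image = pipe(
    prompt="a weathered leather armchair, photorealistic texture",
    image=[depth_map, canny_edges],
    controlnet_conditioning_scale=[0.7, 0.5],
    num_inference_steps=30,
).images[0]
image.save("texture_view_front.png")
```

A full pipeline in the spirit of the abstract would repeat this per viewpoint, apply the described symmetrical view synthesis with regional prompts, and then blend and soft-inpaint the views into a seamless texture; those stages are not shown here.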