TextToon: Real-Time Text Toonify Head Avatar from Single Video
Main Authors: |  |
Format: | Article |
Language: | eng |
Subjects: |  |
Online Access: | Order full text |
Summary: | We propose TextToon, a method to generate a drivable toonified
avatar. Given a short monocular video sequence and a written instruction
about the avatar style, our model can generate a high-fidelity toonified
avatar that can be driven in real time by another video with arbitrary
identities. Existing related work relies heavily on multi-view modeling to
recover geometry via texture embeddings and presents results in a static
manner, which limits controllability. The multi-view video input also makes
these models difficult to deploy in real-world applications. To address
these issues, we adopt a conditional Tri-plane embedding to learn realistic
and stylized facial representations in a Gaussian deformation field.
Additionally, we expand the stylization capabilities of 3D Gaussian
Splatting by introducing an adaptive pixel-translation neural network and
leveraging patch-aware contrastive learning to achieve high-quality images.
To push our work toward consumer applications, we develop a real-time
system that runs at 48 FPS on a GPU machine and 15-18 FPS on a mobile
machine. Extensive experiments demonstrate the efficacy of our approach
over existing methods in generating textual avatars, in terms of both
quality and real-time animation. Please refer to our project page for more
details: https://songluchuan.github.io/TextToon/. |
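The "patch-aware contrastive learning" mentioned in the summary most likely refers to an InfoNCE-style loss over corresponding image patches, as popularized in image-translation work. The paper's exact formulation is not given in this record, so the following is a minimal, hedged PatchNCE-style sketch in PyTorch; the function name, tensor shapes, and temperature value are illustrative assumptions, not details from the paper.

```python
# Minimal PatchNCE-style sketch (illustrative only; not the paper's exact loss).
# Assumed inputs: per-patch features sampled at the same spatial locations of
# the stylized output and the source rendering, each of shape (N, D).
import torch
import torch.nn.functional as F

def patch_contrastive_loss(feat_stylized: torch.Tensor,
                           feat_source: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    q = F.normalize(feat_stylized, dim=1)   # queries: patches of stylized image
    k = F.normalize(feat_source, dim=1)     # keys: patches of source image
    logits = (q @ k.t()) / temperature      # (N, N) pairwise patch similarities
    # The co-located patch (diagonal) is the positive; all others are negatives.
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

# Example usage with 256 random patches of 128-dim features:
loss = patch_contrastive_loss(torch.randn(256, 128), torch.randn(256, 128))
```

Such a loss pulls each stylized patch toward the source patch at the same location while pushing it away from all other patches, which preserves local structure under stylization; whether TextToon uses exactly this form is not specified in the record.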
DOI: | 10.48550/arxiv.2410.07160 |