UMFuse: Unified Multi View Fusion for Human Editing applications
Format: Article
Language: English
Abstract: Numerous pose-guided human editing methods have been explored by the vision community due to their extensive practical applications. However, most of these methods still use an image-to-image formulation in which a single image is given as input to produce an edited image as output. This objective becomes ill-defined when the target pose differs significantly from the input pose. Existing methods then resort to in-painting or style transfer to handle occlusions and preserve content. In this paper, we explore the utilization of multiple views to minimize the issue of missing information and generate an accurate representation of the underlying human model. To fuse knowledge from multiple viewpoints, we design a multi-view fusion network that takes the pose keypoints and texture from multiple source images and generates an explainable per-pixel appearance retrieval map. Thereafter, the encodings from a separate network (trained on a single-view human reposing task) are merged in the latent space. This enables us to generate accurate, precise, and visually coherent images for different editing tasks. We show the application of our network on two newly proposed tasks: multi-view human reposing and Mix&Match human image generation. Additionally, we study the limitations of single-view editing and scenarios in which multi-view provides a better alternative.
DOI: 10.48550/arxiv.2211.10157
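To make the fusion idea in the abstract concrete, below is a minimal, illustrative sketch (not the authors' UMFuse implementation) of one way a per-pixel appearance retrieval map can be predicted from several source views and used to blend their feature encodings. The module names, channel sizes, shared-encoder structure, and the assumption that each view is an image concatenated with pose-keypoint heatmaps are all hypothetical choices made for illustration; the paper's actual network additionally conditions on the target pose and merges encodings from a separately trained single-view reposing network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiViewFusionSketch(nn.Module):
    """Illustrative sketch only (not the UMFuse code): predict a per-pixel
    retrieval map over K source views and blend their encodings with it."""

    def __init__(self, in_channels: int, feat_channels: int = 64, num_views: int = 3):
        super().__init__()
        self.num_views = num_views
        # Shared encoder applied to each (source image + pose-keypoint map) pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
        )
        # Predicts one logit per view at every pixel from the stacked encodings.
        self.retrieval_head = nn.Conv2d(feat_channels * num_views, num_views, 1)

    def forward(self, views: torch.Tensor):
        # views: (B, K, C, H, W) -- K source views, channels = image + keypoint maps.
        b, k, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * k, c, h, w)).reshape(b, k, -1, h, w)
        # Per-pixel retrieval map: softmax over the K views at each location,
        # i.e. "which source view should appearance be taken from here?"
        logits = self.retrieval_head(feats.reshape(b, -1, h, w))   # (B, K, H, W)
        retrieval_map = F.softmax(logits, dim=1)
        # Fused encoding: retrieval-weighted sum of the per-view features.
        fused = (retrieval_map.unsqueeze(2) * feats).sum(dim=1)    # (B, F, H, W)
        return fused, retrieval_map


if __name__ == "__main__":
    # Toy usage: 2 samples, 3 views, 3 image channels + 18 keypoint heatmaps.
    x = torch.randn(2, 3, 3 + 18, 64, 64)
    fused, retrieval_map = MultiViewFusionSketch(in_channels=3 + 18, num_views=3)(x)
    print(fused.shape, retrieval_map.shape)
```

Because the retrieval map is a softmax over views at every pixel, it can be visualized directly, which is one plausible reading of the "explainable per-pixel appearance retrieval map" described in the abstract.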