Masked Extended Attention for Zero-Shot Virtual Try-On In The Wild
Saved in:

Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Virtual Try-On (VTON) is a highly active line of research, with increasing demand. It aims to replace a garment in an image with one from another image, while preserving person and garment characteristics as well as image fidelity. Current literature takes a supervised approach to the task, impairing generalization and imposing heavy computation. In this paper, we present a novel zero-shot, training-free method for inpainting a clothing garment by reference. Our approach employs the prior of a diffusion model with no additional training, fully leveraging its native generalization capabilities. The method uses extended attention to transfer image information from the reference to the target image, overcoming two significant challenges. We first warp the reference garment over the target human using deep features, alleviating "texture sticking". We then apply the extended attention mechanism with careful masking, eliminating leakage of the reference background and other unwanted influence. Through a user study and qualitative and quantitative comparisons to state-of-the-art approaches, we demonstrate superior image quality and garment preservation on unseen clothing pieces and human figures.
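To make the masking idea concrete, below is a minimal PyTorch sketch of masked extended attention. It is illustrative only, not the authors' implementation: the function name `masked_extended_attention`, the (batch, tokens, channels) tensor layout, and the binary garment mask over reference tokens are all assumptions. The sketch shows the two ingredients named in the abstract: target queries attend jointly over concatenated target and reference keys/values ("extended" attention), and reference tokens outside the garment mask are blocked so the reference background cannot leak into the generated image.

```python
import torch


def masked_extended_attention(q_tgt, k_tgt, v_tgt, k_ref, v_ref, ref_mask):
    """Illustrative sketch of extended attention with a reference mask.

    q_tgt:       (B, N_t, C) queries from the target (person) image
    k_tgt/v_tgt: (B, N_t, C) keys/values from the target image
    k_ref/v_ref: (B, N_r, C) keys/values from the reference (garment) image
    ref_mask:    (B, N_r) binary mask, 1 where a reference token belongs
                 to the garment, 0 for reference background
    """
    # "Extended" attention: target queries see both target and reference
    # tokens, concatenated along the sequence dimension.
    k = torch.cat([k_tgt, k_ref], dim=1)   # (B, N_t + N_r, C)
    v = torch.cat([v_tgt, v_ref], dim=1)

    scale = q_tgt.shape[-1] ** -0.5
    scores = torch.bmm(q_tgt, k.transpose(1, 2)) * scale  # (B, N_t, N_t + N_r)

    # Masking: target tokens stay fully visible; reference tokens outside
    # the garment mask get -inf so the reference background cannot leak
    # into the generated image.
    n_t = k_tgt.shape[1]
    bias = torch.zeros_like(scores)
    bias[:, :, n_t:] = torch.where(
        ref_mask[:, None, :].bool(), 0.0, float("-inf")
    )
    attn = torch.softmax(scores + bias, dim=-1)
    return torch.bmm(attn, v)  # (B, N_t, C)
```

In a setup like the one the abstract describes, `ref_mask` would come from a garment segmentation of the (warped) reference image, and a routine of this shape would replace the self-attention computation inside the diffusion model's attention layers at inference time, with no additional training.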
DOI: 10.48550/arxiv.2406.15331