In&Out: Diverse Image Outpainting via GAN Inversion
Format: | Article |
Language: | English |
Abstract: | Image outpainting seeks a semantically consistent extension of the
input image beyond its available content. Compared to inpainting -- filling in
missing pixels in a way coherent with the neighboring pixels -- outpainting can
be achieved in more diverse ways, since the problem is less constrained by the
surrounding pixels. Existing image outpainting methods pose the problem as a
conditional image-to-image translation task, often generating repetitive
structures and textures by replicating the content available in the input
image. In this work, we formulate the problem from the perspective of inverting
generative adversarial networks. Our generator renders micro-patches
conditioned on their joint latent code as well as their individual positions in
the image. To outpaint an image, we seek multiple latent codes that not only
recover the available patches but also synthesize diverse outpainted content
through patch-based generation. This leads to richer structure and content in
the outpainted regions. Furthermore, our formulation allows for outpainting
conditioned on categorical input, thereby enabling flexible user control.
Extensive experimental results demonstrate that the proposed method performs
favorably against existing in- and outpainting methods, featuring higher visual
quality and diversity. |
---|---|
DOI: | 10.48550/arxiv.2104.00675 |
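
The abstract's core idea -- finding latent codes whose generated image matches
the known pixels while the remaining region is left free to vary -- can be
illustrated with a short sketch. The snippet below is a minimal, generic
GAN-inversion loop for outpainting, not the paper's micro-patch generator or
its actual losses; `ToyGenerator`, `LATENT_DIM`, the plain masked L2 loss, and
all hyperparameters are placeholder assumptions.

```python
# Minimal sketch of GAN-inversion-based outpainting (PyTorch).
# Assumes some pretrained generator G; ToyGenerator is only a stand-in.
import torch
import torch.nn as nn

LATENT_DIM = 128  # assumed latent size, not from the paper

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained generator; maps a latent code to a 64x64 RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 3 * 64 * 64),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

def invert_for_outpainting(G, known, mask, n_codes=4, steps=200, lr=0.05):
    """Optimize several latent codes so G(z) reproduces the known region.

    known: (1, 3, H, W) image whose valid pixels lie inside the mask.
    mask:  (1, 1, H, W) binary mask, 1 where pixels are known.
    Returns one full image per latent code; the unmasked area is the
    outpainted content, which differs across codes.
    """
    zs = torch.randn(n_codes, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([zs], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        imgs = G(zs)  # (n_codes, 3, H, W)
        # Reconstruction loss only on the known region; pixels outside the
        # mask are unconstrained, which is what yields diverse outpaintings.
        loss = ((imgs - known) * mask).pow(2).mean()
        loss.backward()
        opt.step()
    return G(zs).detach()

if __name__ == "__main__":
    G = ToyGenerator().eval()
    for p in G.parameters():
        p.requires_grad_(False)  # the generator stays frozen during inversion
    known = torch.zeros(1, 3, 64, 64)  # placeholder "input" image
    mask = torch.zeros(1, 1, 64, 64)
    mask[..., :, :32] = 1.0            # left half known; outpaint the right half
    results = invert_for_outpainting(G, known, mask)
    print(results.shape)               # (4, 3, 64, 64): four diverse candidates
```

Because each latent code is only penalized on the known pixels, different
random initializations converge to different completions of the unknown
region, mirroring the diversity argument in the abstract.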