Reference Based Face Super-Resolution
Saved in:
| Published in: | IEEE Access, 2019, Vol. 7, pp. 129112-129126 |
|---|---|
| Main authors: | , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
Abstract: Despite the great progress of image super-resolution in recent years, face super-resolution still has much room to improve visual quality while preserving original facial attributes at larger up-scaling factors. This paper investigates a new research direction in face super-resolution, called Reference based face Super-Resolution (RefSR), in which a reference facial image containing genuine attributes is provided in addition to the low-resolution image. We focus on transferring the key information extracted from the reference facial image to the super-resolution process, so as to guarantee content similarity between the reference and the super-resolved image. We propose a novel Conditional Variational AutoEncoder model for Reference based Face Super-Resolution (RefSR-VAE). The encoder maps the reference image into a joint latent space; the decoder then samples from this space to super-resolve low-resolution facial images into super-resolved images with good visual quality. We create a benchmark dataset for reference based face super-resolution (RefSR-Face) for general research use, containing reference images paired with low-resolution images of various poses, emotions, ages, and appearances. Both objective and subjective evaluations were conducted, demonstrating the great potential of using reference images for face super-resolution. Compared with state-of-the-art super-resolution approaches, our proposed approach also achieves superior performance.
| ISSN: | 2169-3536 |
|---|---|
| DOI: | 10.1109/ACCESS.2019.2934078 |
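
The abstract outlines the RefSR-VAE pipeline: an encoder maps the reference face into a joint latent space, and a decoder conditioned on the low-resolution input samples from that space to produce the super-resolved face. The following is a minimal sketch of such a conditional VAE in PyTorch; the class name `RefSRVAESketch`, all layer sizes, the concatenation-based fusion of LR features with the latent code, and the L1-plus-KL objective are illustrative assumptions, not the architecture or loss from the paper.

```python
# Minimal conditional-VAE sketch for reference-based face SR (4x, 16x16 -> 64x64).
# All names, sizes, and the fusion scheme are illustrative, not the paper's RefSR-VAE.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RefSRVAESketch(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: maps the 64x64 HR reference face to a latent distribution.
        self.ref_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        # Condition branch: embeds the 16x16 low-resolution input.
        self.lr_encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        # Project the sampled latent code to a 16x16 map for fusion.
        self.fc_z = nn.Linear(latent_dim, 128 * 16 * 16)
        # Decoder: fuse LR features with the latent map, upsample 16 -> 64.
        self.decoder = nn.Sequential(
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),               # 32 -> 64
        )

    def forward(self, lr, ref):
        # Encode the reference face into the joint latent space.
        h = self.ref_encoder(ref)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: differentiable sampling of z.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Decode the sampled code conditioned on LR-image features.
        z_map = self.fc_z(z).view(-1, 128, 16, 16)
        sr = self.decoder(torch.cat([self.lr_encoder(lr), z_map], dim=1))
        return sr, mu, logvar


def cvae_loss(sr, hr, mu, logvar, beta=1e-3):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = F.l1_loss(sr, hr)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld


if __name__ == "__main__":
    model = RefSRVAESketch()
    lr = torch.randn(2, 3, 16, 16)   # low-resolution inputs
    ref = torch.randn(2, 3, 64, 64)  # HR reference faces
    hr = torch.randn(2, 3, 64, 64)   # ground-truth HR targets
    sr, mu, logvar = model(lr, ref)
    print(sr.shape, cvae_loss(sr, hr, mu, logvar).item())
```

This mirrors the encode-then-sample-then-decode flow described in the abstract: the reference supplies the latent code carrying facial attributes, while the LR branch anchors the output to the degraded input.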