Edge-guided Composition Network for Image Stitching

Detailed Description

Bibliographic Details
Published in: Pattern Recognition, 2021-10, Vol. 118, p. 108019, Article 108019
Authors: Dai, Qinyan; Fang, Faming; Li, Juncheng; Zhang, Guixu; Zhou, Aimin
Format: Article
Language: English
Online Access: Full text
Description
Abstract:
• We propose a novel deep learning framework for the composition stage of image stitching.
• We illustrate the importance of preserving structure consistency in image composition, and leverage the structure prior provided by a proposed perceptual edge branch to further enhance composition performance.
• We build the Real Image Stitching Dataset (RISD), the first general-purpose benchmark for image stitching training.
Panorama creation remains challenging in consumer-level photography because of varying capture conditions. A long-standing problem is the presence of artifacts caused by structure-inconsistent image transitions. Since perfect alignment is difficult to achieve in unconstrained shooting environments, especially in the presence of parallax and object movement, image composition becomes a crucial step for producing artifact-free stitching results. Current energy-based seam-cutting composition approaches are limited by hand-crafted features, which are not discriminative and adaptive enough to robustly create structure-consistent image transitions. In this paper, we present the first end-to-end deep learning framework, named Edge Guided Composition Network (EGCNet), for the composition stage of image stitching. We cast the whole composition stage as an image blending problem and aim to regress the blending weights that seamlessly produce the stitched image. To better preserve structure consistency, we exploit perceptual edges to guide the network with an additional geometric prior. Specifically, we introduce a perceptual edge branch to integrate edge features into the model and propose two edge-aware losses for edge guidance. Meanwhile, we gathered a general-purpose dataset for image stitching training and evaluation (namely, RISD). Extensive experiments demonstrate that our EGCNet produces plausible results with lower running time, and outperforms traditional methods especially under parallax and object motion.
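The composition-as-blending formulation described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' actual network: in EGCNet the per-pixel weight map would be regressed by the learned model, whereas here we simply apply a given map to two pre-aligned images.

```python
import numpy as np

def blend(warped_a: np.ndarray, warped_b: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Compose two aligned images using a per-pixel weight map.

    weights lies in [0, 1]: 1.0 selects image A, 0.0 selects image B,
    and intermediate values produce a smooth transition. A hard 0/1 map
    corresponds to a classical seam cut; a regressed soft map is the
    blending the abstract describes (hypothetical stand-in for EGCNet's output).
    """
    w = weights[..., np.newaxis]  # add channel axis so it broadcasts over RGB
    return w * warped_a + (1.0 - w) * warped_b

# Toy example: two 2x2 RGB "images" and a hard vertical seam.
a = np.full((2, 2, 3), 0.8)
b = np.full((2, 2, 3), 0.2)
w = np.array([[1.0, 0.0],
              [1.0, 0.0]])  # left column from A, right column from B
out = blend(a, b, w)
```

In the toy example the left column of `out` takes its values from `a` and the right column from `b`, mimicking a seam; a learned weight map would instead vary smoothly near structure edges to hide the transition.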
ISSN:0031-3203
1873-5142
DOI:10.1016/j.patcog.2021.108019