ISGTA: an effective approach for multi-image stitching based on gradual transformation matrix
Published in: | Signal, Image and Video Processing, 2023-10, Vol. 17 (7), p. 3811–3820 |
Main authors: | , , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Image stitching is an important branch of computer vision, especially for panoramic maps and virtual reality. Although the performance of image stitching has improved significantly, the final stitched image still suffers from shape distortion. To overcome this limitation, this research proposes an effective image stitching technique, the gradual transformation algorithm (ISGTA), which is based on our proposed gradual transformation matrix (GTM) to eliminate shape distortion. For images captured by a horizontally moving camera, this study assumes that only translation operations are involved in the image stitching process. Specifically, a GTM is first proposed to gradually transform the global homography matrix into a translation matrix, eliminating the effects of scaling and rotation in the image transformation. Secondly, a matrix approximation algorithm is proposed to obtain the minimum of a deformation energy function, thereby minimizing the shape distortion of the homography-transformed regions. Finally, the ISGTA is combined with the as-projective-as-possible (APAP) warp to ensure accurate alignment of overlapping areas. The ISGTA also avoids the stitching failures of multiple horizontal images caused by accumulated shape distortion. Experimental results on captured images demonstrate the effectiveness of the proposed approach compared with state-of-the-art methods. |
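The paper's actual GTM construction is not given in this record. As an illustrative sketch only, one simple way to "gradually transform" a global homography into a translation matrix is a per-step linear blend between the homography and a pure-translation matrix that keeps only its translation components; the function name and blend scheme below are assumptions, not the authors' method.

```python
import numpy as np

def gradual_transform(H, t):
    """Blend a 3x3 homography H toward its translation-only part.

    t = 0 returns H unchanged; t = 1 returns a pure translation
    matrix that keeps only H's translation entries (tx, ty), so the
    scaling, rotation, and perspective terms fade out as t grows.
    This is an assumed interpolation for illustration, not the GTM
    defined in the paper.
    """
    T = np.eye(3)
    T[0, 2] = H[0, 2]   # keep horizontal translation tx
    T[1, 2] = H[1, 2]   # keep vertical translation ty
    G = (1.0 - t) * H + t * T
    return G / G[2, 2]  # renormalize so G stays a valid homography

# Example homography with rotation, scaling, and perspective terms.
theta, s = 0.1, 1.2
H = np.array([
    [s * np.cos(theta), -s * np.sin(theta), 40.0],
    [s * np.sin(theta),  s * np.cos(theta), 15.0],
    [1e-4,               2e-4,               1.0],
])

for t in (0.0, 0.5, 1.0):
    G = gradual_transform(H, t)
    print(f"t={t}: upper-left 2x2 block =\n{G[:2, :2].round(3)}")
```

At t = 1 the upper-left 2×2 block reduces to the identity and the bottom row to (0, 0, 1), i.e. a pure translation, which matches the record's description of eliminating scaling and rotation while preserving the horizontal camera motion.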
ISSN: | 1863-1703 1863-1711 |
DOI: | 10.1007/s11760-023-02609-9 |