Establishing Keypoint Matches on Multimodal Images With Bootstrap Strategy and Global Information

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2017-06, Vol. 26 (6), p. 3064-3076
Main authors: Li, Yong; Jin, Hongbin; Wu, Jiatao; Liu, Jie
Format: Article
Language: English
Description
Abstract: This paper proposes an algorithm for building keypoint matches on multimodal images by combining a bootstrap process with global information. The correct ratio of keypoint matches built with descriptors is typically very low on multimodal images with large spectral differences. To identify correct matches, global information is utilized to evaluate keypoint matches, and a bootstrap technique is employed to reduce the computational cost. A keypoint match determines a transformation T, and a similarity metric is computed between the reference image and the test image transformed by T. The similarity metric encodes global information over the entire images; hence, a higher similarity indicates that the match brings more image content into alignment and therefore tends to be correct. Unfortunately, exhaustively evaluating all triplets/quadruples of matches for an affine/projective transformation is computationally intractable when the number of keypoints is large. To reduce the computational cost, a bootstrap technique is employed that starts from single matches for a translation-and-rotation model and progresses incrementally to quadruples of matches for a projective model. The global information screens for "good" matches at each stage, and the bootstrap strategy makes the screening process computationally feasible. Experimental results show that the proposed method can establish reliable keypoint matches on challenging image pairs with strong multimodality.
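
The following is a minimal sketch of the screening idea described above, not the authors' implementation. It assumes grayscale NumPy images, matches given as ((x_ref, y_ref), (x_test, y_test)) pixel pairs, and zero-mean normalized cross-correlation as a stand-in for the global similarity metric, since the abstract does not fix these details; all function names are illustrative. Only the single-match translation stage and the affine triplet stage are shown, whereas the paper's pipeline also covers rotation and a final projective (quadruple) stage.

# Illustrative sketch only; names and the NCC similarity are assumptions.
import itertools
import numpy as np
import cv2


def global_similarity(ref, warped):
    """Global similarity over the overlap region (placeholder: zero-mean NCC)."""
    mask = warped > 0                      # ignore regions left empty by warping
    if mask.sum() < 100:
        return -1.0
    a = ref[mask].astype(np.float64) - ref[mask].mean()
    b = warped[mask].astype(np.float64) - warped[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else -1.0


def screen_translations(ref, test, matches, keep=50):
    """Stage 1: each single match implies a translation; keep the matches whose
    translation aligns the two images best according to the global similarity."""
    h, w = ref.shape[:2]
    scored = []
    for (xr, yr), (xt, yt) in matches:
        M = np.float32([[1, 0, xr - xt], [0, 1, yr - yt]])   # pure translation
        warped = cv2.warpAffine(test, M, (w, h))
        scored.append((global_similarity(ref, warped), ((xr, yr), (xt, yt))))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [m for _, m in scored[:keep]]


def screen_affine_triplets(ref, test, matches, keep=10):
    """Later stage: only surviving matches are combined into triplets, each of
    which determines an affine transform that is again scored globally."""
    h, w = ref.shape[:2]
    scored = []
    for triplet in itertools.combinations(matches, 3):
        src = np.float32([m[1] for m in triplet])   # points in the test image
        dst = np.float32([m[0] for m in triplet])   # corresponding reference points
        A = cv2.getAffineTransform(src, dst)
        warped = cv2.warpAffine(test, A, (w, h))
        scored.append((global_similarity(ref, warped), triplet))
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored[:keep]   # best triplets would seed the projective (quadruple) stage

Scoring every candidate warp over the full images is still expensive, which is exactly why each stage keeps only a small set of surviving matches before forming the larger combinations needed by the next stage.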
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2017.2695885