Correspondence Matching of Multi-View Video Sequences Using Mutual Information Based Similarity Measure



Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2013-12, Vol. 15 (8), p. 1719-1731
Main Authors: Lee, Soon-Young, Sim, Jae-Young, Kim, Chang-Su, Lee, Sang-Uk
Format: Article
Language: English
Description
Abstract: We propose a correspondence matching algorithm for multi-view video sequences, which provides reliable performance even when the multiple cameras have significantly different parameters, such as viewing angles and positions. We use an activity vector, which represents the temporal occurrence pattern of moving foreground objects at a pixel position, as an invariant feature for correspondence matching. We first devise a novel similarity measure between activity vectors by considering their joint and individual behavior. Specifically, we define random variables associated with the activity vectors and measure their similarity using the mutual information between the random variables. Moreover, to find a reliable homography transform between views, we find consistent pixel positions by employing iterative bidirectional matching. We also refine the matching results of multiple source pixel positions by minimizing a matching cost function based on a Markov random field. Experimental results show that the proposed algorithm provides more accurate and reliable matching performance than conventional activity-based and feature-based matching algorithms, and therefore can facilitate various applications of visual sensor networks.
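The core similarity measure described above treats each pixel's temporal activity pattern as a discrete random variable and scores a candidate correspondence by the mutual information between the two variables. The paper's exact estimator is not reproduced here; the following is a minimal sketch of that idea for binary activity vectors, with the function name and the example vectors being illustrative assumptions.

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """Mutual information (in bits) between two equal-length discrete
    sequences, using empirical (plug-in) probability estimates.

    A high value suggests the two pixel positions observe correlated
    foreground activity, i.e. they are likely to correspond.
    """
    n = len(x)
    if n == 0 or n != len(y):
        raise ValueError("sequences must be non-empty and equal length")
    px = Counter(x)            # marginal counts of x
    py = Counter(y)            # marginal counts of y
    pxy = Counter(zip(x, y))   # joint counts of (x, y)
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        # p_ab * n * n / (px[a] * py[b]) == p(a,b) / (p(a) * p(b))
        mi += p_ab * log2(p_ab * n * n / (px[a] * py[b]))
    return mi

# Hypothetical binary activity vectors: 1 = moving foreground observed
# at that frame, 0 = background only.
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0]
print(mutual_information(a, a))  # identical vectors: MI = H(a) = 1.0 bit here
print(mutual_information(a, b))  # partially agreeing vectors: smaller, >= 0
```

Because the measure depends only on the occurrence pattern over time, not on appearance, it stays usable when the cameras differ strongly in viewpoint, as the abstract emphasizes.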
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2013.2271747