Kernelized Saliency-Based Person Re-Identification Through Multiple Metric Learning


Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2015-12, Vol. 24 (12), p. 5645-5658
Main Authors: Martinel, Niki; Micheloni, Christian; Foresti, Gian Luca
Format: Article
Language: English
Description
Abstract: Person re-identification in a non-overlapping multi-camera scenario is an open and interesting challenge. While the task can hardly be completed by machines, we, as humans, are inherently able to sample the relevant details of a person's appearance that allow us to solve the problem correctly in a fraction of a second. Knowing where a human might fixate to recognize a person is therefore of paramount interest for re-identification. Inspired by human gazing capabilities, we aim to identify the salient regions of a person's appearance to tackle the problem. Toward this objective, we introduce the following main contributions. A kernelized graph-based approach is used to detect the salient regions of a person's appearance, which are later used as a weighting tool in the feature extraction process. The proposed person representation combines visual features extracted both with and without saliency weighting. These are then exploited in a pairwise-based multiple metric learning framework. Finally, the non-Euclidean metrics that have been learned separately for each feature are fused to re-identify a person. The proposed kernelized saliency-based person re-identification through multiple metric learning has been evaluated on four publicly available benchmark data sets, showing superior performance over state-of-the-art approaches (e.g., a rank-1 correct recognition rate of 42.41% on the VIPeR data set).
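
The abstract outlines a pipeline of saliency-weighted feature extraction, per-feature metric learning, and fusion of the separately learned non-Euclidean metrics into one matching score. The sketch below illustrates only that final fusion step and is not the authors' implementation: the function names, feature channels, uniform fusion weights, and the identity matrices standing in for the learned Mahalanobis-like metrics are all illustrative assumptions.

```python
# Minimal sketch of fusing per-feature learned metrics into one
# re-identification score, assuming one PSD matrix per feature channel.
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared distance (x - y)^T M (x - y) under a learned PSD matrix M."""
    d = x - y
    return float(d @ M @ d)

def fused_distance(probe_feats, gallery_feats, metrics, weights=None):
    """Fuse per-feature distances into a single matching score.

    probe_feats / gallery_feats: dict feature_name -> 1-D np.ndarray
    metrics: dict feature_name -> PSD matrix learned for that feature
    weights: optional dict feature_name -> fusion weight (uniform if None)
    """
    names = list(metrics)
    if weights is None:
        weights = {n: 1.0 / len(names) for n in names}
    return sum(
        weights[n] * mahalanobis_sq(probe_feats[n], gallery_feats[n], metrics[n])
        for n in names
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dims = {"color_hist": 32, "texture": 16}  # hypothetical feature channels
    # Identity matrices replace the learned metrics in this toy example.
    metrics = {n: np.eye(d) for n, d in dims.items()}
    probe = {n: rng.normal(size=d) for n, d in dims.items()}
    gallery = [{n: rng.normal(size=d) for n, d in dims.items()} for _ in range(5)]
    # Rank gallery identities by fused distance; the argmin is the rank-1 match.
    scores = [fused_distance(probe, g, metrics) for g in gallery]
    print("best match index:", int(np.argmin(scores)))
```

In practice each metric would come from the pairwise metric learning stage, and the probe/gallery vectors would be the saliency-weighted descriptors; the fusion weights could likewise be learned rather than uniform.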
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2015.2487048