Discriminative Transformation for Multi-Dimensional Temporal Sequences

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2017-07, Vol. 26 (7), pp. 3579-3593
Authors: Su, Bing; Ding, Xiaoqing; Liu, Changsong; Wang, Hao; Wu, Ying
Format: Article
Language: English
Description
Abstract: Feature-space transformation techniques have been widely studied for dimensionality reduction in vector-based feature spaces. However, these techniques are inapplicable to sequence data because the features within the same sequence are not independent. In this paper, we propose a method called max-min inter-sequence distance analysis (MMSDA) to transform features in sequences into a low-dimensional subspace such that different sequence classes are holistically separated. To utilize the temporal dependencies, MMSDA first aligns the features in sequences of the same class to an adapted number of temporal states, and then constructs the sequence-class separability from the statistics of these ordered states. To learn the transformation, MMSDA formulates the objective of maximizing the minimal pairwise separability in the latent subspace as a semi-definite programming problem, and provides a new tractable and effective solution, with theoretical proofs, via constraint unfolding and pruning, convex relaxation, and within-class scatter compression. Extensive experiments on different tasks demonstrate the effectiveness of MMSDA.
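
The abstract only describes MMSDA at a high level. As a rough illustration of the max-min idea it sketches (align each sequence to ordered temporal states, build state-wise class statistics, and maximize the smallest pairwise class separability under a linear projection), the following minimal NumPy sketch may help. It is not the paper's algorithm: the uniform segmentation, the trace-ratio separability measure, and the random-projection search are stand-ins for the adapted state alignment and the semi-definite programming solution the authors describe, and all names (align_to_states, pairwise_separability, K_STATES, ...) are hypothetical.

# A minimal, illustrative sketch of the max-min inter-sequence separability idea.
# Assumptions (not from the paper): uniform segmentation for state alignment,
# a trace-ratio separability measure, and random-projection search instead of
# the SDP-based optimisation.
import numpy as np

K_STATES = 4        # assumed number of ordered temporal states per class
SUBSPACE_DIM = 2    # assumed dimensionality of the learned subspace


def align_to_states(seq, k=K_STATES):
    """Split a (T, d) sequence into k ordered segments by uniform segmentation
    (a crude stand-in for the adaptive alignment used in the paper)."""
    bounds = np.linspace(0, seq.shape[0], k + 1).astype(int)
    return [seq[bounds[i]:bounds[i + 1]] for i in range(k)]


def class_state_stats(sequences, k=K_STATES):
    """Per-state means and pooled within-state scatter for one class."""
    segmented = [align_to_states(seq, k) for seq in sequences]
    d = sequences[0].shape[1]
    means = np.zeros((k, d))
    scatter = np.zeros((d, d))
    for s in range(k):
        frames = np.vstack([segs[s] for segs in segmented])
        mu = frames.mean(axis=0)
        means[s] = mu
        centered = frames - mu
        scatter += centered.T @ centered
    return means, scatter


def pairwise_separability(W, means_a, means_b, Sw):
    """Between-class energy over the ordered states, normalised by the projected
    within-class scatter (one plausible measure; not the paper's exact one)."""
    diff = means_a - means_b                       # (k, d) state-wise mean differences
    between = np.trace(W.T @ (diff.T @ diff) @ W)  # projected between-class energy
    within = np.trace(W.T @ Sw @ W) + 1e-8
    return between / within


def min_pairwise_separability(W, class_stats, Sw_total):
    """The max-min objective: the smallest separability over all class pairs."""
    vals = [pairwise_separability(W, class_stats[i][0], class_stats[j][0], Sw_total)
            for i in range(len(class_stats))
            for j in range(i + 1, len(class_stats))]
    return min(vals)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 classes, each with 10 variable-length 5-D sequences whose
    # frame distributions differ by a class-dependent mean shift.
    data = []
    for c in range(3):
        data.append([rng.normal(loc=c, scale=1.0, size=(int(rng.integers(20, 40)), 5))
                     for _ in range(10)])

    class_stats = [class_state_stats(seqs) for seqs in data]
    Sw_total = sum(s for _, s in class_stats)

    # Search random orthonormal projections and keep the one with the largest
    # minimal pairwise separability (a stand-in for the SDP-based learning).
    best_W, best_val = None, -np.inf
    for _ in range(200):
        W, _ = np.linalg.qr(rng.normal(size=(5, SUBSPACE_DIM)))
        val = min_pairwise_separability(W, class_stats, Sw_total)
        if val > best_val:
            best_W, best_val = W, val
    print(f"best min pairwise separability over random projections: {best_val:.3f}")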
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2017.2704438