Learning Low-Rank Representation Approximation for Few-Shot Deep Subspace Clustering
Saved in:
Published in: | IEEE Transactions on Circuits and Systems for Video Technology, 2024-11, Vol. 34 (11), pp. 10590-10603 |
Main authors: | , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | As one of the most effective subspace clustering methods, the self-expression-based sparsity method leverages the robust representation learning and non-linear transformation capacities of deep learning. This approach maps data into a low-dimensional subspace, in which the clustering operations are subsequently executed. However, most conventional self-expression methods do not handle the subspace clustering problem with sparse label information. Considering the scarcity and value of labeled samples in many real-world applications, we propose a novel deep Few-Shot Subspace Clustering Learning (FS2CL) framework to improve traditional self-expression-based techniques under sparse label information, where a subset of classes in the observed dataset has only a small number of labeled samples and most other classes have none. We aim to obtain more discriminative low-rank representations that exhibit high cohesion within clusters. To overcome the limitation that low-rank approximation is conventionally achieved by singular value decomposition (SVD), which is not differentiable and therefore cannot be embedded in neural networks for gradient backpropagation, we propose a Low-rank Representation Approximation (LRA) module that transforms the non-differentiable SVD into a differentiable iterative process. This procedure produces a low-rank representation that maximizes the cohesion of features belonging to the same cluster. Subsequently, we propose a method for learning a low-dimensional learnable subspace bases matrix, assisted by a small number of labeled samples, which captures the structure of each subspace. We then assign each data point to its class by measuring the similarity between the instance and each subspace base. Because the subspace bases matrix is low-dimensional, our method scales to large datasets.
Extensive comparison studies on six benchmark datasets (MNIST, Fashion-MNIST, REUTERS-10K, STL-10, CIFAR10, and CIFAR100) show that the proposed method outperforms state-of-the-art clustering approaches. |
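The abstract's key idea — replacing a hard, non-differentiable SVD with an iterative scheme built only from differentiable operations — can be illustrated with a generic orthogonal (subspace) iteration. This is a minimal sketch of that general idea, not the paper's actual LRA module; the function name and iteration count are illustrative assumptions.

```python
import numpy as np

def low_rank_approx(X, rank, n_iter=30):
    """Rank-`rank` approximation of X via orthogonal (subspace) iteration.

    The loop uses only matrix products and QR, so the whole pipeline
    stays differentiable if re-expressed with autograd ops -- illustrating
    how an iterative process can stand in for a hard SVD truncation.
    Hypothetical sketch, not the paper's LRA module.
    """
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((X.shape[1], rank))
    for _ in range(n_iter):
        # Power step followed by re-orthonormalization
        Q, _ = np.linalg.qr(X.T @ (X @ Q))
    # Project the rows of X onto the estimated rank-`rank` row subspace
    return (X @ Q) @ Q.T

# Toy check: a matrix of exact rank 2 is recovered (almost) exactly
A = np.outer(np.arange(1, 5), np.ones(6)) + np.outer(np.ones(4), np.arange(6))
X_lr = low_rank_approx(A, rank=2)
```

For a matrix whose rank already equals the target rank, the projection recovers the input up to numerical error, which makes the behavior easy to sanity-check.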
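The abstract also describes assigning each data point to a class by measuring its similarity to each learned subspace base. A plausible instance of this idea, assuming orthonormal bases and projection energy as the similarity score (the paper's exact measure may differ), is:

```python
import numpy as np

def assign_to_subspaces(Z, D):
    """Assign each row of Z (n x d features) to one of k subspaces.

    D is a (k, r, d) stack of orthonormal subspace bases (r basis
    vectors per class). The score is the norm of the projection of a
    sample onto each subspace; the largest projection wins.
    Hypothetical sketch, not the paper's exact similarity measure.
    """
    # Coefficients of every sample in every basis: shape (n, k, r)
    coeffs = np.einsum('krd,nd->nkr', D, Z)
    scores = np.linalg.norm(coeffs, axis=2)  # projection energy, (n, k)
    return scores.argmax(axis=1)

# Toy example: two 1-D subspaces in R^2 (the coordinate axes)
D = np.array([[[1.0, 0.0]],
              [[0.0, 1.0]]])          # shape (2, 1, 2)
Z = np.array([[3.0, 0.1],
              [0.2, 5.0]])            # two samples
labels = assign_to_subspaces(Z, D)
```

Because the bases matrix is small (k x r x d with r much less than n), this scoring step is cheap, which is consistent with the abstract's claim of scalability to large datasets.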
ISSN: | 1051-8215 1558-2205 |
DOI: | 10.1109/TCSVT.2024.3411615 |