Lifelong Visual-Tactile Spectral Clustering for Robotic Object Perception
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2023-02, Vol. 33 (2), pp. 818-829
Main authors: , , ,
Format: Article
Language: English
Abstract: This work presents a novel visual-tactile fused clustering framework, called Lifelong Visual-Tactile Spectral Clustering (LVTSC), to effectively learn consecutive object clustering tasks for robotic perception. Lifelong learning has become an important and active topic in recent machine learning research, aiming to imitate human learning and reduce the computational cost of learning new tasks consecutively. Our proposed LVTSC model explores knowledge transfer and representation correlation from a local modality-invariant perspective under modality-consistent constraint guidance. For the modality-invariant part, we design a set of modality-invariant basis libraries to capture the latent clustering centers of each modality, and a set of modality-invariant feature libraries to embed the manifold information of each modality. A modality-consistent constraint reinforces the correlation between the visual and tactile modalities by maximizing the feature manifold correspondences. As object clustering tasks arrive continuously, the overall objective is optimized by an effective alternating direction method with guaranteed convergence. The effectiveness and efficiency of the proposed LVTSC framework have been extensively validated on three challenging real-world robotic object perception datasets.
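The core fusion idea in the abstract, combining similarity structure from the visual and tactile modalities and clustering in a shared spectral embedding, can be illustrated with a generic multi-view spectral clustering sketch. This is not the authors' LVTSC optimization (which uses learned basis/feature libraries and an alternating direction solver); the RBF affinity, the simple averaging of normalized Laplacians, and the function name below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_kernels

def multiview_spectral_cluster(views, n_clusters, gamma=1.0):
    """Cluster n objects observed in several modalities (e.g. visual and
    tactile feature matrices, each of shape (n, d_m)) by averaging the
    normalized graph Laplacians of the per-modality affinity graphs.
    Illustrative baseline only, not the LVTSC algorithm itself."""
    n = views[0].shape[0]
    L_sum = np.zeros((n, n))
    for X in views:
        # RBF affinity within one modality (assumed choice of kernel)
        W = pairwise_kernels(X, metric="rbf", gamma=gamma)
        np.fill_diagonal(W, 0.0)
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        # symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
        L_sum += np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt
    L_avg = L_sum / len(views)
    # smallest eigenvectors of the fused Laplacian give the shared embedding
    _, U = eigh(L_avg, subset_by_index=[0, n_clusters - 1])
    # row-normalize before k-means (Ng-Jordan-Weiss style)
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(U)
```

A lifelong variant, as the abstract describes, would additionally carry compact per-modality libraries across tasks instead of recomputing everything from scratch for each new object clustering task.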
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2022.3206865