Object Description Using Visual and Tactile Data

Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 54525-54536
Main Authors: Zhang, Peng; Zhou, Maohui; Shan, Dongri; Chen, Zhenxue; Wang, Xiaofang
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: With the development of vision and haptic sensor technologies, robots have become increasingly capable of perceiving their external environment. Although machine vision and haptics have surpassed humans in some aspects of perception, it remains difficult for robots to describe objects from multiple viewpoints using a combination of visual and haptic modalities. In this study, convolutional neural networks are used to extract visual and haptic features separately, and the two types of features are then fused. Multitask learning is then combined with multilabel classification to form a multitask-multilabel classification method, which identifies the color, shape, material attributes, and class of an object from the fused visual-haptic feature vector. To verify the effectiveness of the proposed object description method, experiments are conducted on the PHAC-2 dataset and the collected VHAC dataset. The experimental results show that, among the compared methods, the proposed method produces the most accurate object descriptions while requiring the fewest parameters.
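
To make the pipeline the abstract describes more concrete (two CNN feature extractors, feature-level fusion by concatenation, and multitask-multilabel output heads), the following is a minimal PyTorch sketch. Every architectural detail here (layer sizes, the haptic input modeled as a 4-channel time series, the number of labels per attribute head, and the equal loss weighting) is an illustrative assumption, not taken from the paper.

# Minimal sketch of a visual-haptic multitask-multilabel classifier.
# All hyperparameters and module names below are illustrative assumptions.
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Small CNN that maps an RGB image to a feature vector."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class HapticEncoder(nn.Module):
    """Small 1-D CNN that maps a haptic time series to a feature vector."""
    def __init__(self, in_channels=4, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class MultiTaskMultiLabelNet(nn.Module):
    """Fuses visual and haptic features, then predicts attributes and class."""
    def __init__(self, n_colors=8, n_shapes=6, n_materials=10, n_classes=20):
        super().__init__()
        self.visual = VisualEncoder()
        self.haptic = HapticEncoder()
        fused = 128 + 128  # fusion here is simple concatenation
        self.color = nn.Linear(fused, n_colors)        # attribute head (multilabel)
        self.shape = nn.Linear(fused, n_shapes)        # attribute head (multilabel)
        self.material = nn.Linear(fused, n_materials)  # attribute head (multilabel)
        self.category = nn.Linear(fused, n_classes)    # class head (single-label)
    def forward(self, img, hap):
        z = torch.cat([self.visual(img), self.haptic(hap)], dim=1)
        return self.color(z), self.shape(z), self.material(z), self.category(z)

model = MultiTaskMultiLabelNet()
img = torch.randn(2, 3, 64, 64)  # batch of RGB images
hap = torch.randn(2, 4, 100)     # batch of 4-channel haptic sequences
color, shape, material, category = model(img, hap)

# Multitask loss: binary cross-entropy for the multilabel attribute heads,
# cross-entropy for the object class; the equal weighting is an assumption.
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
color_t = torch.randint(0, 2, (2, 8)).float()
shape_t = torch.randint(0, 2, (2, 6)).float()
material_t = torch.randint(0, 2, (2, 10)).float()
class_t = torch.randint(0, 20, (2,))
loss = (bce(color, color_t) + bce(shape, shape_t)
        + bce(material, material_t) + ce(category, class_t))
loss.backward()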
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3174874