Self-supervised Transfer Learning for Instance Segmentation through Physical Interaction
Saved in:
Main author: | , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | Instance segmentation of unknown objects from images is regarded as relevant for several robot skills, including grasping, tracking, and object sorting. Recent results in computer vision have shown that large hand-labeled datasets enable high segmentation performance. To overcome the time-consuming process of manually labeling data for new environments, we present a transfer learning approach for robots that learn to segment objects by interacting with their environment in a self-supervised manner. Our robot pushes unknown objects on a table and uses information from optical flow to create training labels in the form of object masks. To achieve this, we fine-tune an existing DeepMask network for instance segmentation on the self-labeled training data acquired by the robot. We evaluate our trained network (SelfDeepMask) on a set of real images showing challenging and cluttered scenes with novel objects. Here, SelfDeepMask outperforms the DeepMask network trained on the COCO dataset by 9.5% in average precision. Furthermore, we combine our approach with recent approaches for training with noisy labels in order to better cope with the induced label noise. |
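The abstract describes turning optical flow from before/after-push image pairs into binary object masks for training labels. The paper's actual labeling code is not given here; as a rough illustration of the core idea only, a minimal sketch (with a hypothetical function name and an arbitrary magnitude threshold, and assuming the dense flow field has already been computed, e.g. with OpenCV's `cv2.calcOpticalFlowFarneback`) might look like:

```python
import numpy as np

def flow_to_mask(flow, mag_thresh=1.0):
    """Threshold per-pixel optical-flow magnitude into a binary object mask.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements between the
    images taken before and after a push; pixels that moved more than
    mag_thresh are treated as belonging to the pushed object.
    """
    mag = np.linalg.norm(flow, axis=2)  # (H, W) displacement magnitudes
    return mag > mag_thresh

# Synthetic example: a 10x10 scene where a 3x3 "object" moved 5 px to the right.
flow = np.zeros((10, 10, 2))
flow[2:5, 2:5, 0] = 5.0
mask = flow_to_mask(flow)
```

In practice such a raw mask would still need cleanup (morphological filtering, connected-component selection) before it could serve as a DeepMask training label, which is one source of the label noise the authors address.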
DOI: | 10.48550/arxiv.2005.09484 |