Panoptic Segmentation using Synthetic and Real Data
Saved in:
Main Authors: , , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: Being able to understand the relations between the user and the surrounding
environment is instrumental in assisting users at a worksite. For instance,
understanding which objects a user is interacting with from images and video
collected through a wearable device can be useful to inform the worker about the
usage of specific objects in order to improve productivity and prevent
accidents. Although modern vision systems can rely on advanced algorithms for
object detection, semantic and panoptic segmentation, these methods still
require large quantities of domain-specific labeled data, which can be
difficult to obtain in industrial scenarios. Motivated by this observation, we
propose a pipeline which generates synthetic images from 3D models of
real environments and real objects. The generated images are automatically
labeled and hence effortless to obtain. Exploiting the proposed pipeline, we
generate a dataset comprising synthetic images automatically labeled for
panoptic segmentation. This set is complemented by a small number of manually
labeled real images for fine-tuning. Experiments show that the use of synthetic
images drastically reduces the number of real images needed to obtain
reasonable panoptic segmentation performance.
DOI: 10.48550/arxiv.2204.07069
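
The summary describes a two-stage training schedule: pre-train a segmentation model on a large, automatically labeled synthetic set, then fine-tune it on a small number of manually labeled real images. The sketch below illustrates that schedule in PyTorch under stated assumptions: the model (DeepLabV3, a semantic segmentation network used as a stand-in; a panoptic model would follow the same schedule), the placeholder dataset class, and all hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of the two-stage schedule from the abstract:
# stage 1 pre-trains on automatically labeled synthetic images,
# stage 2 fine-tunes on a small set of manually labeled real images.
# Model choice, dataset wrapper, and hyperparameters are assumptions.
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 8  # assumed number of object/stuff classes


class ImageMaskDataset(Dataset):
    """Placeholder dataset yielding (image, per-pixel label) pairs.

    In practice this would read the synthetic renders (with their
    automatically generated masks) or the manually labeled real images.
    """

    def __init__(self, num_samples):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.rand(3, 256, 256)                   # RGB image
        mask = torch.randint(0, NUM_CLASSES, (256, 256))  # class id per pixel
        return image, mask


def train(model, loader, epochs, lr, device):
    """One training stage: plain cross-entropy over per-pixel logits."""
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            logits = model(images)["out"]                 # (B, C, H, W)
            loss = criterion(logits, masks)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


device = "cuda" if torch.cuda.is_available() else "cpu"
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)

# Stage 1: pre-train on a large set of automatically labeled synthetic images.
synthetic_loader = DataLoader(ImageMaskDataset(1000), batch_size=4, shuffle=True)
train(model, synthetic_loader, epochs=10, lr=1e-2, device=device)

# Stage 2: fine-tune on a small set of manually labeled real images,
# with a lower learning rate.
real_loader = DataLoader(ImageMaskDataset(50), batch_size=4, shuffle=True)
train(model, real_loader, epochs=5, lr=1e-3, device=device)
```

The point the sketch carries over from the abstract is the asymmetry between the two stages: the synthetic set is large and free to label, so it does most of the representation learning, while the real set is small and only refines the model at a reduced learning rate.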