Amodal Panoptic Segmentation
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Humans have the remarkable ability to perceive objects as a whole, even when parts of them are occluded. This ability of amodal perception forms the basis of our perceptual and cognitive understanding of our world. To enable robots to reason with this capability, we formulate and propose a novel task that we name amodal panoptic segmentation. The goal of this task is to simultaneously predict the pixel-wise semantic segmentation labels of the visible regions of stuff classes and the instance segmentation labels of both the visible and occluded regions of thing classes. To facilitate research on this new task, we extend two established benchmark datasets with pixel-level amodal panoptic segmentation labels that we make publicly available as KITTI-360-APS and BDD100K-APS. We present several strong baselines, along with the amodal panoptic quality (APQ) and amodal parsing coverage (APC) metrics to quantify the performance in an interpretable manner. Furthermore, we propose the novel amodal panoptic segmentation network (APSNet), as a first step towards addressing this task by explicitly modeling the complex relationships between the occluders and occludees. Extensive experimental evaluations demonstrate that APSNet achieves state-of-the-art performance on both benchmarks and, more importantly, exemplifies the utility of amodal recognition. The benchmarks are available at http://amodal-panoptic.cs.uni-freiburg.de.
DOI: 10.48550/arxiv.2202.11542
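
The task definition in the abstract (pixel-wise semantic labels for the visible regions of stuff classes, plus visible and amodal instance masks for thing classes) can be made concrete with a small data-structure sketch. The Python below is purely illustrative: the class names, fields, and layout are assumptions chosen for clarity and do not reflect the official KITTI-360-APS / BDD100K-APS label format or the APSNet implementation.

```python
# Minimal, illustrative sketch of an amodal panoptic prediction, following the
# task definition in the abstract. All names here are hypothetical, not the
# dataset or APSNet conventions.
from dataclasses import dataclass, field
from typing import List
import numpy as np


@dataclass
class ThingInstance:
    class_id: int                # thing class (e.g. car, pedestrian)
    visible_mask: np.ndarray     # bool HxW mask of the visible region only
    amodal_mask: np.ndarray      # bool HxW mask of visible + occluded region


@dataclass
class AmodalPanopticPrediction:
    # Pixel-wise semantic labels for the visible regions of stuff classes;
    # pixels belonging to thing instances can carry an ignore value.
    stuff_semantics: np.ndarray  # int HxW map of stuff class ids
    instances: List[ThingInstance] = field(default_factory=list)

    def occluded_region(self, idx: int) -> np.ndarray:
        """Occluded part of instance `idx`: amodal minus visible pixels."""
        inst = self.instances[idx]
        return inst.amodal_mask & ~inst.visible_mask
```

The point of the sketch is the asymmetry in the task: stuff classes are predicted only where they are visible, whereas each thing instance carries two masks, so the occluded portion of an object is simply the set difference between its amodal and visible masks.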