Learning latent geometric consistency for 6D object pose estimation in heavily cluttered scenes
Published in: Journal of Visual Communication and Image Representation, 2020-07, Vol. 70, p. 102790, Article 102790
Main authors: , , , ,
Format: Article
Language: English
Online access: Full text
Highlights:
• A dual-stream deep learning network is proposed for 6D object pose estimation.
• Latent geometric consistency is learned to enforce structural constraints.
• The proposed scheme is robust to heavy occlusion and segmentation errors.
• Pairwise dense features are generated from two frames of different viewing angles.
Abstract: 6D object pose (3D rotation and translation) estimation from RGB-D images is an important and challenging task in computer vision and has been widely applied in areas such as robotic manipulation, autonomous driving, and augmented reality. Prior works extract global features or reason about local appearance from an individual frame, which neglects the spatial geometric relevance between two frames and limits their performance on occluded or truncated objects in heavily cluttered scenes. In this paper, we present a dual-stream network for estimating the 6D pose of a set of known objects from RGB-D images. In contrast to prior work, our network learns latent geometric consistency in pairwise dense feature representations from multiple observations of the same objects in a self-supervised manner. Experiments show that our method outperforms state-of-the-art approaches for 6D object pose estimation on two challenging datasets, YCB-Video and LineMOD.
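The core idea of pairwise geometric consistency can be illustrated with a minimal sketch (not the paper's code; the function name and setup are hypothetical): 3D points predicted from one view, rigidly transformed by the known relative pose between two frames, should agree with the points predicted from the other view, and the mean discrepancy can serve as a self-supervised loss.

```python
import numpy as np

def geometric_consistency_loss(pts_a, pts_b, R, t):
    """Mean Euclidean distance between view-A points mapped into
    view B's frame and the corresponding view-B points.

    pts_a, pts_b: (N, 3) arrays of matched 3D points from the two streams.
    R: (3, 3) relative rotation, t: (3,) relative translation (A -> B).
    """
    # Rigidly transform A's predictions into B's coordinate frame.
    pts_a_in_b = pts_a @ R.T + t
    return float(np.mean(np.linalg.norm(pts_a_in_b - pts_b, axis=1)))

# Sanity check: a geometrically consistent prediction yields (near-)zero loss.
rng = np.random.default_rng(0)
pts_b = rng.normal(size=(100, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.1, -0.2, 0.05])
pts_a = (pts_b - t) @ R  # same points expressed in view A's frame
print(round(geometric_consistency_loss(pts_a, pts_b, R, t), 6))  # → 0.0
```

In a training setting this penalty would be minimized alongside the pose loss, encouraging the two streams to produce structurally consistent dense features across viewpoints.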
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2020.102790