Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55

Bibliographic Details
Published in: arXiv.org, 2017-10
Main authors: Li, Yi, Shao, Lin, Savva, Manolis, Huang, Haibin, Zhou, Yang, Wang, Qirui, Graham, Benjamin, Engelcke, Martin, Klokov, Roman, Lempitsky, Victor, Gan, Yuan, Wang, Pengyu, Liu, Kun, Yu, Fenggen, Panpan Shui, Hu, Bingyang, Zhang, Yan, Li, Yangyan, Bu, Rui, Sun, Mingchao, Wu, Wei, Jeong, Minki, Choi, Jaehoon, Kim, Changick, Angom Geetchandra, Murthy, Narasimha, Bhargava Ramu, Bharadwaj Manda, Ramanathan, M, Kumar, Gautam, Preetham, P, Srivastava, Siddharth, Bhugra, Swati, Lall, Brejesh, Haene, Christian, Tulsiani, Shubham, Malik, Jitendra, Lafer, Jared, Jones, Ramsey, Li, Siyuan, Lu, Jie, Shi, Jin, Yu, Jingyi, Huang, Qixing, Kalogerakis, Evangelos, Savarese, Silvio, Hanrahan, Pat, Funkhouser, Thomas, Su, Hao, Guibas, Leonidas
Format: Article
Language: English
Online access: Full text
Description
Summary: We introduce a large-scale 3D shape understanding benchmark using data and annotations from the ShapeNet 3D object database. The benchmark consists of two tasks: part-level segmentation of 3D shapes and 3D reconstruction from single-view images. Ten teams participated in the challenge, and the best-performing teams outperformed state-of-the-art approaches on both tasks. Several novel deep learning architectures operating on various 3D representations were proposed for both tasks. We report the techniques used by each team and the corresponding performances. In addition, we summarize the major findings from the reported results and possible trends for future work in the field.
ISSN: 2331-8422
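
The summary above mentions a part-level segmentation task but does not state how submissions are scored. As an illustration only, the sketch below shows one common way point-wise part labels on a shape are evaluated, per-part Intersection-over-Union averaged over parts; the metric choice, function name, and label encoding are assumptions for this sketch, not details taken from the record.

```python
# Minimal sketch (assumption): per-part IoU averaged over a shape's part labels,
# a typical way to score point-wise part segmentation of 3D shapes.
import numpy as np

def mean_part_iou(pred_labels: np.ndarray, gt_labels: np.ndarray, num_parts: int) -> float:
    """Average IoU over the part categories of a single shape.

    pred_labels, gt_labels: integer part label per point, shape (N,).
    num_parts: number of part categories for this shape's object class.
    """
    ious = []
    for part in range(num_parts):
        pred_mask = pred_labels == part
        gt_mask = gt_labels == part
        union = np.logical_or(pred_mask, gt_mask).sum()
        if union == 0:
            # Part absent in both prediction and ground truth: count as perfect.
            ious.append(1.0)
        else:
            inter = np.logical_and(pred_mask, gt_mask).sum()
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example: a 6-point shape with 2 part categories.
pred = np.array([0, 0, 1, 1, 1, 0])
gt   = np.array([0, 0, 1, 1, 0, 0])
print(mean_part_iou(pred, gt, num_parts=2))  # ~0.708
```

A shape-level score like this would then typically be averaged over all shapes, and possibly over object categories, to produce a single benchmark number.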