Training deep convolutional neural networks to acquire the best view of a 3D shape


Bibliographic Details
Published in: Multimedia Tools and Applications, 2020, Vol. 79 (1-2), pp. 581-601
Main Authors: Zhou, Wen; Jia, Jinyuan
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: In a 3D shape retrieval system, when selecting the best view from many candidate view images, the ability to project a 3D shape into related view images from multiple viewpoints is important. Furthermore, learning the best view from benchmark sketch datasets is one of the most effective ways to acquire the best view of a 3D shape. In this paper, we propose a learning framework based on deep neural networks to obtain the best views of a shape. We apply transfer learning to obtain features, i.e., we use two AlexNet convolutional neural networks (CNNs) for feature extraction: one for the view images and the other for the sketches. Specifically, the proposed framework learns the connections needed to build an automatic best-view selector for different types of 3D shapes. We train on the Shape Retrieval Contest 2014 Sketch Track Benchmark (SHREC'14) to capture the relevant rules. Finally, we report experiments that demonstrate the feasibility of our approach. In addition, to better evaluate the proposed framework and show its superiority, we apply it to a sketch-based model retrieval task, where it outperforms other state-of-the-art methods.
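The selection step summarized above can be sketched in a few lines. This is an illustrative fragment only, assuming the two CNN branches (one for view images, one for sketches) have already produced fixed-length feature vectors; the `select_best_view` helper and the cosine-similarity scoring are assumptions for illustration, not the paper's exact learned selector:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_best_view(sketch_feature, view_features):
    """Return the index of the view image whose feature best matches the sketch.

    sketch_feature: 1-D array produced by the sketch CNN branch.
    view_features:  list of 1-D arrays, one per rendered viewpoint,
                    produced by the view-image CNN branch.
    """
    scores = [cosine_similarity(sketch_feature, v) for v in view_features]
    return int(np.argmax(scores))

# Toy example with hand-made 3-D "features":
sketch = np.array([1.0, 0.0, 0.0])
views = [np.array([0.0, 1.0, 0.0]),   # orthogonal: similarity 0
         np.array([1.0, 1.0, 0.0]),   # similarity ~0.71
         np.array([2.0, 0.0, 0.0])]   # parallel: similarity 1
best = select_best_view(sketch, views)  # → 2
```

In the paper's setting the features would come from the two AlexNet extractors rather than being hand-made, and the selector is learned rather than a fixed similarity rule.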
ISSN: 1380-7501 (print); 1573-7721 (electronic)
DOI: 10.1007/s11042-019-08107-w