Object Recognition Using Deep Convolutional Features Transformed by a Recursive Network Structure

Bibliographic Details
Published in: IEEE Access, 2016, Vol. 4, p. 10059-10066
Authors: Bui, Hieu Minh; Lech, Margaret; Cheng, Eva; Neville, Katrina; Burnett, Ian S.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Deep neural networks (DNNs) trained on large data sets have been shown to capture high-quality features describing image data. Numerous studies have proposed various ways to transfer DNN structures trained on large data sets to classification tasks represented by relatively small data sets. Due to the limitations of these proposals, it is still not well understood how to effectively adapt a pre-trained model to a new task. Typically, the transfer process uses a combination of fine-tuning and training of adaptation layers; however, both tasks are susceptible to problems with data shortage and high computational complexity. This paper proposes an improvement to the well-known AlexNet feature extraction technique. The proposed approach applies a recursive neural network structure to features extracted by a deep convolutional neural network pre-trained on a large data set. Object recognition experiments on the Washington RGB-D image data set show that the proposed method combines structural simplicity with higher recognition accuracy at low computational cost compared with other relevant methods. The new approach requires no training at the feature extraction phase and can be performed very efficiently; the output features are compact and highly discriminative and can be used with a simple classifier in object recognition settings.
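
The abstract describes a two-stage pipeline: convolutional features from a pre-trained AlexNet are passed through a recursive network stage with fixed, untrained weights to obtain a compact descriptor for a simple classifier. The Python sketch below only illustrates that idea and is not the authors' implementation; the crop to a 4x4 grid, the single shared random merge matrix, the tanh non-linearity, and the use of torchvision's pre-trained AlexNet are assumptions made here for brevity.

    # Minimal sketch (not the paper's code): AlexNet conv features transformed by a
    # fixed, randomly initialised recursive stage into a compact descriptor.
    import torch
    from torchvision import models

    # 1. Pre-trained AlexNet as a frozen convolutional feature extractor
    #    (224x224 RGB input -> 256 x 6 x 6 feature map).
    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    conv = alexnet.features.eval()

    # 2. Recursive stage with shared, untrained random weights: 2x2 blocks of
    #    feature vectors are repeatedly merged with one weight matrix and a tanh
    #    non-linearity until a single vector per image remains (no training).
    torch.manual_seed(0)
    C = 256
    W = torch.randn(C, 4 * C) / (4 * C) ** 0.5   # shared random merge weights

    def recursive_pool(x):
        # x: (B, C, H, W) feature map; crop to 4x4 so repeated halving reaches 1x1.
        x = x[:, :, :4, :4]
        while x.shape[-1] > 1:
            B, C_, H, _ = x.shape
            blocks = x.unfold(2, 2, 2).unfold(3, 2, 2)            # (B, C, H/2, W/2, 2, 2)
            blocks = blocks.permute(0, 2, 3, 1, 4, 5).reshape(B, H // 2, H // 2, 4 * C_)
            x = torch.tanh(blocks @ W.T).permute(0, 3, 1, 2)      # (B, C, H/2, W/2)
        return x.flatten(1)                                       # (B, C) compact feature

    # 3. Example: one image -> fixed-length descriptor for a simple classifier.
    with torch.no_grad():
        feat = recursive_pool(conv(torch.rand(1, 3, 224, 224)))
    print(feat.shape)   # torch.Size([1, 256])

Because the recursive weights are never trained, the transformation adds essentially no cost at feature extraction time, which is consistent with the low computational cost the abstract reports.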
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2016.2639543