Flower classification with modified multimodal convolutional neural networks

Bibliographic Details
Published in: Expert Systems with Applications, 2020-11, Vol. 159, p. 113455, Article 113455
Authors: Bae, Kang Il; Park, Junghoon; Lee, Jongga; Lee, Yungseop; Lim, Changwon
Format: Article
Language: English
Online access: Full text
Description
Abstract:

Highlights:
• We constructed a convolutional neural network model that uses image and text together.
• We proposed a new flower classification algorithm using a multi-view learning method.
• We proposed a multimodal feature extraction framework over the learned representation.

A new multi-view learning algorithm is proposed by modifying an existing method, the multimodal convolutional neural network originally developed for image-text matching (modified m-CNN), so that not only images but also texts are used for classification. First, a pre-trained CNN and a word embedding model are applied to extract visual features and to represent each word in a text as a vector, respectively. Second, textual features are extracted by applying a CNN model to the text data. Finally, the pairs of features extracted by the text and image CNNs are concatenated and fed into a convolutional layer, which can better learn the important feature information in the integrated representation of image and text. The features extracted from this convolutional layer are input to a fully connected layer to perform classification. Experimental results demonstrate that the proposed algorithm achieves superior performance compared with other data fusion methods for flower classification on a dataset of flower images and their Korean descriptions. More specifically, the accuracy of the proposed algorithm is 10.1% and 14.5% higher than that of the m-CNN and multimodal recurrent neural network algorithms, respectively. The proposed method can significantly improve the performance of flower classification. The code and related data are publicly available via our GitHub repository.
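The fusion pipeline described in the abstract (visual features, textual features, concatenation, a convolutional fusion layer, then a fully connected classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensions, kernel size, and number of classes are hypothetical, and the pre-trained image CNN and text CNN are stubbed with random vectors since their architectures are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper's actual sizes are not given in the abstract.
img_dim, txt_dim, n_classes = 128, 128, 5

# Step 1: visual features from a pre-trained image CNN (stubbed as a random vector).
img_feat = rng.standard_normal(img_dim)

# Step 2: textual features from a text CNN over word embeddings (also stubbed).
txt_feat = rng.standard_normal(txt_dim)

# Step 3: concatenate the image/text feature pair and pass it through a
# 1-D convolution, so the fused representation can capture joint
# image-text interactions (a ReLU follows the convolution).
fused = np.concatenate([img_feat, txt_feat])         # shape (256,)
kernel = rng.standard_normal(3)
conv_out = np.convolve(fused, kernel, mode="valid")  # shape (254,)
conv_out = np.maximum(conv_out, 0)                   # ReLU

# Step 4: fully connected layer with a softmax to produce class probabilities.
W = rng.standard_normal((n_classes, conv_out.size))
logits = W @ conv_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(fused.shape, conv_out.shape, probs.shape)
```

In a trained model the convolution kernel and the fully connected weights would be learned jointly with the two feature extractors; here they are random, so only the data flow and tensor shapes are meaningful.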
ISSN:0957-4174
1873-6793
DOI:10.1016/j.eswa.2020.113455