RepConv: a novel architecture for image scene classification on the Intel scenes dataset
Saved in:
Published in: International Journal of Intelligent Computing and Information Sciences 2022-04, Vol. 22 (2), p. 63-73
Main authors: , ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Image understanding and scene classification are keystone tasks in computer vision. The advancement of technology and the abundance of available datasets in the field of image classification and recognition provide plenty of opportunities for progress. In the scene classification problem, transfer learning, a branch of machine learning, is commonly utilized. Despite the superior performance of existing machine learning models in image interpretation and scene classification, challenges remain: in most circumstances, the weights of current models are not suitable. Instead of relying on the weights of data-dependent models, this work presents a novel machine learning model for the scene classification task that converges rapidly. The proposed model was tested on the Intel scenes dataset for a comprehensive evaluation. The proposed model, RepConv, outperformed four existing benchmark models with a low number of epochs and training parameters, achieving accuracies of 93.55 ± 0.11 on training data and 75.54 ± 0.14 on validation data. Furthermore, the dataset was re-categorized for a new binary classification problem not previously reported in the literature (natural scenes vs. real scenes). On this binary task, the proposed model achieved an accuracy of 98.08 ± 0.05 on training data and 92.70 ± 0.08 on validation data, results not previously reported in any other publication.
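As an illustration of the transfer-learning setup the abstract contrasts its approach against — a frozen, pretrained feature extractor with only a new classification head trained on the target scenes — here is a minimal, self-contained sketch. Everything in it is assumed for illustration: the "backbone" is a fixed random projection rather than a real pretrained network, the data are synthetic, and only the six-class count mirrors the Intel scenes categories. It is not the authors' RepConv model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes = 6    # the Intel scenes dataset has 6 scene categories
feat_dim = 32

# Stand-in for a frozen, pretrained backbone: a fixed random
# projection followed by ReLU.  Its weights are never updated.
W_backbone = rng.normal(size=(64, feat_dim))

def extract_features(x):
    """Frozen 'backbone': fixed random projection + ReLU."""
    return np.maximum(x @ W_backbone, 0.0)

# Synthetic 'images' (flattened) with class-dependent means,
# standing in for the real scene images.
n_per_class = 50
X = np.concatenate([rng.normal(loc=c, scale=1.0, size=(n_per_class, 64))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Extract and standardize frozen features.
F = extract_features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)

# Trainable head: multinomial logistic regression via gradient descent.
W_head = np.zeros((feat_dim, n_classes))
b = np.zeros(n_classes)
for _ in range(300):
    logits = F @ W_head + b
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0                   # softmax cross-entropy gradient
    W_head -= 0.01 * F.T @ p / len(y)
    b -= 0.01 * p.mean(axis=0)

acc = (np.argmax(F @ W_head + b, axis=1) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The point of the sketch is the division of labor: the backbone weights stay fixed ("data-dependent weights" the abstract refers to), and only the small head is fit to the new task — which is precisely the dependence the proposed RepConv model aims to avoid by training a compact architecture from scratch.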
ISSN: 1687-109X, 2535-1710
DOI: 10.21608/ijicis.2022.118834.1163