Fast&Focused-Net: Enhancing Small Object Encoding With VDP Layer in Deep Neural Networks
| Published in: | IEEE Access, 2024, Vol. 12, p. 130603 |
|---|---|
| Main authors: | , , |
| Format: | Article |
| Language: | English |
| Online access: | Full text |
| Abstract: | In this paper, we introduce Fast&Focused-Net (FFN), a novel deep neural network architecture tailored for efficiently encoding small objects into fixed-length feature vectors. Unlike conventional Convolutional Neural Networks (CNNs), FFN employs a stack of our newly proposed Volume-wise Dot Product (VDP) layers, designed to address several inherent limitations of CNNs. Specifically, CNNs exhibit a smaller effective receptive field (ERF) than their theoretical receptive field, limiting how much of the image they can see. Additionally, the initial layers of CNNs produce low-dimensional feature vectors, creating a bottleneck for subsequent learning. Lastly, CNNs incur significant computational overhead, particularly in capturing diverse image regions through parameter sharing. The VDP layer, at the heart of FFN, aims to remedy these issues by covering the entire image patch with reduced computational cost (an illustrative sketch follows this record). Experimental results demonstrate the effectiveness of FFN across a variety of applications. Our network outperformed state-of-the-art methods on small object classification tasks on the CIFAR-10, CIFAR-100, STL-10, SVHN-Cropped, and Fashion-MNIST datasets. For larger image classification, when combined with a transformer encoder (ViT), FFN produced competitive results on the OpenImages V6, ImageNet-1K, and Places365 datasets. Moreover, the same combination delivered the strongest performance on text recognition tasks across the SVT, IC15, SVTP, and HOST datasets. |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/access.2024.3447888 |
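
The record above does not include the paper's exact formulation of the VDP layer, but the property the abstract claims for it (each output unit sees the entire input patch from the first layer on, yielding a high-dimensional encoding at reduced cost) can be sketched in a few lines. The sketch below is a minimal illustration under that assumption: the class name `VDPLayer`, the realization as a single linear projection over the flattened patch volume, the GELU activation, and all dimensions are hypothetical choices for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn


class VDPLayer(nn.Module):
    """Illustrative sketch of a volume-wise dot product layer (assumed form).

    Each output unit takes a dot product with the ENTIRE input volume
    (channels x height x width), so even the first layer "sees" the whole
    patch -- unlike a convolution, whose effective receptive field grows
    only gradually with depth.
    """

    def __init__(self, in_channels: int, height: int, width: int, out_features: int):
        super().__init__()
        # One learned weight volume per output unit, realized here as a
        # single linear projection over the flattened C*H*W patch.
        self.proj = nn.Linear(in_channels * height * width, out_features)
        self.act = nn.GELU()  # activation choice is an assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, C, H, W) -> (batch, out_features)
        return self.act(self.proj(x.flatten(start_dim=1)))


# Example: encode a 32x32 RGB patch (e.g., a CIFAR-10 image) into a
# 256-dimensional fixed-length feature vector in a single layer.
patch = torch.randn(8, 3, 32, 32)
layer = VDPLayer(in_channels=3, height=32, width=32, out_features=256)
features = layer(patch)
print(features.shape)  # torch.Size([8, 256])
```

For the larger-image experiments mentioned in the abstract, such a patch encoder would plausibly replace the linear patch-embedding stage of a ViT, with the transformer encoder operating on the resulting per-patch vectors; the abstract confirms the FFN-plus-ViT combination but not this exact wiring.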