Incorporating attentive multi-scale context information for image captioning



Bibliographic details
Published in: Multimedia tools and applications 2023-03, Vol.82 (7), p.10017-10037
Authors: Prudviraj, Jeripothula; Sravani, Yenduri; Mohan, C. Krishna
Format: Article
Language: English
Online access: Full text
Description
Abstract: In this paper, we propose a novel encoding framework to learn the multi-scale context information of the visual scene for the image captioning task. The devised multi-scale context information comprises spatial, semantic, and instance-level features of an input image. We draw spatial features from early convolutional layers, and obtain multi-scale semantic features by employing a feature pyramid network on top of deep convolutional neural networks. We then concatenate the spatial and multi-scale semantic features to harvest fine-to-coarse details of the visual scene. Further, instance-level features are captured by applying a bilinear interpolation technique to the fused representation to hold object-level semantics of an image. We exploit an attention mechanism on the attained features to guide the caption decoding module. In addition, we explore various combinations of encoding techniques to acquire global and local features of an image. The efficacy of the proposed approaches is demonstrated on the COCO dataset.
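The fusion steps the abstract outlines (early-layer spatial features, pyramid-style multi-scale semantic features, channel-wise concatenation, bilinear interpolation of the fused map, and an attention step over spatial positions) can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`bilinear_resize`, `encode_multiscale`), all tensor shapes, and the simplified attention score (softmax over mean channel activation) are assumptions made for the sketch.

```python
import numpy as np

def bilinear_resize(feat, out_h, out_w):
    """Bilinearly resample a (C, H, W) feature map to (C, out_h, out_w)."""
    c, h, w = feat.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    out = np.empty((c, out_h, out_w))
    for i, y in enumerate(ys):
        y0 = int(np.floor(y))
        y1 = min(y0 + 1, h - 1)
        wy = y - y0
        for j, x in enumerate(xs):
            x0 = int(np.floor(x))
            x1 = min(x0 + 1, w - 1)
            wx = x - x0
            top = (1 - wx) * feat[:, y0, x0] + wx * feat[:, y0, x1]
            bot = (1 - wx) * feat[:, y1, x0] + wx * feat[:, y1, x1]
            out[:, i, j] = (1 - wy) * top + wy * bot
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def encode_multiscale(spatial, pyramid, out_size=7):
    """Fuse spatial + multi-scale semantic features, then attend over positions.

    spatial : (C_s, H, W) early-convolutional-layer feature map (assumed shape)
    pyramid : list of (C_p, h_i, w_i) FPN-style maps at different scales
    """
    # Resize every pyramid level to the spatial map's resolution, concat channels.
    _, h, w = spatial.shape
    levels = [bilinear_resize(p, h, w) for p in pyramid]
    fused = np.concatenate([spatial] + levels, axis=0)   # (C_s + sum C_p, H, W)
    # Bilinear interpolation of the fused representation to a fixed grid,
    # standing in for the paper's instance-level feature extraction.
    inst = bilinear_resize(fused, out_size, out_size)    # (C, out, out)
    c = inst.shape[0]
    flat = inst.reshape(c, -1)                           # (C, out*out)
    # Toy spatial attention: weight each location by its mean activation.
    alpha = softmax(flat.mean(axis=0))                   # (out*out,) sums to 1
    context = flat @ alpha                               # (C,) attended context vector
    return context, alpha

# Hypothetical shapes for illustration only.
rng = np.random.default_rng(0)
spatial_feat = rng.standard_normal((8, 14, 14))
pyramid_feats = [rng.standard_normal((4, 7, 7)), rng.standard_normal((4, 4, 4))]
context, alpha = encode_multiscale(spatial_feat, pyramid_feats)
```

In a real system the attention scores would be conditioned on the decoder's hidden state rather than on the feature map alone; the sketch only shows how the three feature sources are brought to a common resolution and reduced to a single context vector.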
ISSN:1380-7501
1573-7721
DOI:10.1007/s11042-021-11895-9