Semantic-aware scene recognition
Saved in:
Published in: Pattern Recognition, 2020-06, Vol. 102, p. 107256, Article 107256
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Online Access: Full text
Abstract:
•A novel approach for scene recognition based on an end-to-end multi-modal CNN that combines image and context information by means of an attention module.
•Context information, in the shape of semantic segmentation, is used to gate features extracted from an RGB image.
•The gating process reinforces the learning of indicative scene content and enhances scene disambiguation.
•The proposed approach outperforms the state of the art while significantly reducing the number of network parameters.

Scene recognition is currently one of the most challenging research fields in computer vision. This may be due to the ambiguity between classes: images of several scene classes may share similar objects, which causes confusion among them. The problem is aggravated when images of a particular scene class are notably different. Convolutional Neural Networks (CNNs) have significantly boosted performance in scene recognition, although it still falls short of other recognition tasks (e.g., object or image recognition). In this paper, we describe a novel approach for scene recognition based on an end-to-end multi-modal CNN that combines image and context information by means of an attention module. Context information, in the shape of a semantic segmentation, is used to gate features extracted from the RGB image by leveraging information encoded in the semantic representation: the set of scene objects and stuff, and their relative locations. This gating process reinforces the learning of indicative scene content and enhances scene disambiguation by refocusing the receptive fields of the CNN towards them. Experimental results on three publicly available datasets show that the proposed approach outperforms every other state-of-the-art method while significantly reducing the number of network parameters. All the code and data used in this paper are available at: https://github.com/vpulab/Semantic-Aware-Scene-Recognition
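The gating described in the abstract can be illustrated with a minimal NumPy sketch: a per-location attention map is derived from semantic-segmentation features and multiplied element-wise into the RGB feature maps. The function name `semantic_gating`, the weight vector `w`, and the channels-last layout are illustrative assumptions, not the authors' implementation, which is available at the repository linked above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_gating(rgb_features, sem_features, w):
    """Gate RGB feature maps with an attention map computed from
    semantic-segmentation features (channels-last: H x W x C)."""
    # Project semantic features to one attention score per spatial
    # location (hypothetical 1x1-projection weights `w`, shape (C_sem,)),
    # squashed to (0, 1) so it acts as a soft gate.
    attention = sigmoid(sem_features @ w)          # shape (H, W)
    # Broadcast the attention map over the RGB feature channels,
    # emphasizing scene-indicative locations and suppressing the rest.
    return rgb_features * attention[..., None]     # shape (H, W, C_rgb)

# Toy example: 4x4 spatial grid, 8 RGB feature channels, 3 semantic channels
rng = np.random.default_rng(0)
rgb = rng.standard_normal((4, 4, 8))
sem = rng.standard_normal((4, 4, 3))
w = rng.standard_normal(3)
gated = semantic_gating(rgb, sem, w)
print(gated.shape)  # (4, 4, 8)
```

Because the gate lies in (0, 1), the gated features are a spatially reweighted version of the RGB features rather than a replacement for them, which matches the abstract's description of refocusing the CNN's receptive fields.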
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2020.107256