PKCAM: Previous Knowledge Channel Attention Module
Format: Article
Language: English
Abstract: Recently, attention mechanisms have been explored with ConvNets, both across
the spatial and channel dimensions. However, to the best of our knowledge, all
existing methods apply attention modules to capture local interactions at a
single scale. In this paper, we propose a Previous Knowledge Channel Attention
Module (PKCAM) that captures channel-wise relations across different layers to
model global context. PKCAM is easily integrated into any feed-forward CNN
architecture and trained in an end-to-end fashion, with a negligible footprint
due to its lightweight design. We validate our architecture through extensive
experiments on image classification and object detection tasks with different
backbones. Our experiments show consistent improvements in performance over the
corresponding baselines. Our code is published at https://github.com/eslambakr/EMCA.
DOI: 10.48550/arxiv.2211.07521