Beyond Word Embeddings: Heterogeneous Prior Knowledge Driven Multi-Label Image Classification
Published in: | IEEE transactions on multimedia 2023, Vol.25, p.4013-4025 |
---|---|
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Multi-Label Image Classification (MLIC) is a fundamental yet challenging task that aims to recognize multiple labels in a given image. The key to solving MLIC lies in accurately modeling the correlations between labels. Recent studies often adopt a Graph Convolutional Network (GCN) to model label dependencies, with word embeddings as prior knowledge. However, classical word embeddings typically contain redundant information due to the imperfect distributional hypothesis they rely on, which may degrade model generalizability. To tackle this problem, we propose a novel deep learning framework termed Visual-Semantic based Graph Convolutional Network (VSGCN), which alleviates the negative impact of redundant information by utilizing heterogeneous sources of prior knowledge. Specifically, we construct both a visual prototype and a semantic prototype for each label as heterogeneous prior label representations, which are further mapped to multi-label classifiers via two separate Multi-Head GCNs. The Multi-Head GCN mechanism proposed in this paper guides information propagation between label prototypes, constructing multiple correlation graphs to simultaneously model label correlations in different subspaces. Notably, we alleviate the negative influence of needless information by decreasing the inconsistency between predictions from the visual space and the semantic space. Extensive experiments on various multi-label image datasets demonstrate the superiority of the proposed method. |
---|---|
ISSN: | 1520-9210 1941-0077 |
DOI: | 10.1109/TMM.2022.3171095 |
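The abstract's core idea is a Multi-Head GCN: each head propagates label prototypes over its own correlation graph, so label dependencies are modeled in several subspaces at once, and the propagated prototypes serve as per-label classifiers. The paper's actual graph construction, prototypes, and training objective are not given here, so the following is only a minimal NumPy sketch of that mechanism under invented shapes, random prototypes, and random binary correlation graphs (all hypothetical names: `normalize_adj`, `multi_head_gcn_layer`).

```python
import numpy as np

def normalize_adj(A):
    # symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as is standard in GCNs
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return (A_hat * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def multi_head_gcn_layer(H, adjs, Ws):
    # One head per correlation graph: each head propagates the label
    # prototypes H over its own normalized adjacency and projects them
    # into its subspace; head outputs are concatenated per label.
    heads = [np.maximum(normalize_adj(A) @ H @ W, 0.0)  # ReLU
             for A, W in zip(adjs, Ws)]
    return np.concatenate(heads, axis=1)

# Toy setup: 5 labels, 16-d prototypes, 2 heads with 8-d subspaces each.
rng = np.random.default_rng(0)
C, d, h, k = 5, 16, 8, 2
H = rng.normal(size=(C, d))                              # label prototypes
adjs = [(rng.random((C, C)) > 0.5).astype(float) for _ in range(k)]
Ws = [0.1 * rng.normal(size=(d, h)) for _ in range(k)]
classifiers = multi_head_gcn_layer(H, adjs, Ws)          # shape (C, k*h)
```

In the framework the abstract describes, one such module would run on visual prototypes and a second on semantic prototypes, with an additional loss term penalizing disagreement between the two resulting sets of label predictions.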