Representation-Enhanced Status Replay Network for Multisource Remote-Sensing Image Classification
Published in: | IEEE Transactions on Neural Networks and Learning Systems, November 2024, Vol. 35, No. 11, pp. 15346-15358 |
Format: | Article |
Language: | English |
Abstract: | Deep-learning-based methods are widely used in multisource remote-sensing image classification, and their improving performance confirms the effectiveness of deep learning for classification tasks. However, inherent problems of deep-learning models still hinder further gains in classification accuracy. For example, after multiple rounds of optimization, representation bias and classifier bias accumulate, which prevents further improvement of network performance. In addition, the imbalance of fusion information among multisource images leads to insufficient information interaction throughout the fusion process, making it difficult to fully exploit the complementary information of multisource data. To address these issues, a representation-enhanced status replay network (RSRNet) is proposed. First, a dual augmentation comprising modal augmentation and semantic augmentation is proposed to enhance the transferability and discreteness of the feature representation, reducing the impact of representation bias in the feature extractor. Then, to alleviate classifier bias and keep the decision boundary stable, a status replay strategy (SRS) is built to regulate the learning and optimization of the classifier. Finally, to improve the interactivity of modal fusion, a novel cross-modal interactive fusion (CMIF) method is employed to jointly optimize the parameters of the different branches by combining multisource information. Quantitative and qualitative results on three datasets demonstrate that RSRNet outperforms other state-of-the-art methods in multisource remote-sensing image classification. |
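The record does not specify how the cross-modal interactive fusion (CMIF) is implemented. As a rough illustration only, the following hypothetical NumPy sketch shows one common way two modality branches (e.g. hyperspectral and SAR/LiDAR features) can exchange information via mutual cross-attention before fusion; every function and variable name here is an assumption for illustration, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_fusion(feat_a, feat_b):
    """Toy cross-attention fusion (illustrative, not the paper's CMIF):
    each modality attends to the other, and the two enriched feature
    sets are averaged into one joint representation."""
    d = feat_a.shape[1]
    attn_ab = softmax(feat_a @ feat_b.T / np.sqrt(d))  # A attends to B
    attn_ba = softmax(feat_b @ feat_a.T / np.sqrt(d))  # B attends to A
    a_enh = feat_a + attn_ab @ feat_b  # A enriched with B's information
    b_enh = feat_b + attn_ba @ feat_a  # B enriched with A's information
    return (a_enh + b_enh) / 2.0

rng = np.random.default_rng(0)
hsi = rng.standard_normal((5, 16))   # e.g. 5 hyperspectral patch features
sar = rng.standard_normal((5, 16))   # e.g. 5 co-registered SAR features
fused = cross_modal_fusion(hsi, sar)
print(fused.shape)  # (5, 16)
```

In a trained network, gradients flowing through both attention paths would update the parameters of both branches jointly, which is one plausible reading of "jointly optimize the parameters of different branches" in the abstract.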
ISSN: | 2162-237X (print), 2162-2388 (electronic) |
DOI: | 10.1109/TNNLS.2023.3286422 |