Refining multi-modal remote sensing image matching with repetitive feature optimization
Published in: International Journal of Applied Earth Observation and Geoinformation, 2024-11, Vol. 134, Article 104186
Format: Article
Language: English
Online Access: Full text
Abstract:
Highlights:
•A matching refinement framework for multi-modal remote sensing images.
•A detailed-texture removal strategy for repeatable feature extraction.
•Validated with manual checkpoints; surpasses the state of the art in accuracy.
•Practical application potential for optical and SAR image rectification.
Existing methods for matching multi-modal remote sensing images (MRSIs) are broadly adaptable, yet high-precision matching for rectification remains challenging: the differing imaging mechanisms of cross-modal images produce numerous non-repeatable detailed feature points, and the common assumption of a linear transformation between images conflicts with the complex distortions present in remote sensing imagery, limiting matching accuracy. This paper improves matching accuracy by applying a detailed-texture removal strategy that isolates repeatable structural features. We then construct a radiation-invariant similarity function within a generalized gradient framework for least-squares matching, designed specifically to mitigate nonlinear geometric and radiometric distortions across MRSIs. Comprehensive qualitative and quantitative evaluations on multiple datasets, using a large number of manual checkpoints, show that the method significantly improves matching accuracy across multiple modal combinations and outperforms current state-of-the-art solutions. Rectification experiments with WorldView and TanDEM-X images further achieve a matching accuracy of 1.05 pixels, indicating the method's practical utility and generalization capacity. Experiment-related data and code will be provided at https://github.com/LiaoYF001/refinement/.
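The abstract does not disclose implementation details, so the following is a minimal, assumption-laden sketch of the two ideas it names, not the authors' method: iterated bilateral filtering stands in for the detailed-texture removal step, Shi-Tomasi corners stand in for repeatable structural feature extraction, and a gradient-orientation correlation stands in for the radiation-invariant, gradient-based similarity. All function names, parameters, and file names are hypothetical.

```python
# Minimal sketch under stated assumptions; not the paper's implementation.
import cv2
import numpy as np


def remove_detail_texture(img, d=9, sigma_color=50, sigma_space=7, iters=3):
    """Suppress fine texture while keeping large-scale structure.

    Iterated bilateral filtering is used here purely as a stand-in
    edge-preserving smoother; the paper's actual strategy may differ.
    """
    out = img.copy()
    for _ in range(iters):
        out = cv2.bilateralFilter(out, d, sigma_color, sigma_space)
    return out


def structural_keypoints(img, max_pts=500):
    """Detect corners on the texture-suppressed image.

    Shi-Tomasi corners are an illustrative detector choice.
    """
    pts = cv2.goodFeaturesToTrack(img, maxCorners=max_pts,
                                  qualityLevel=0.01, minDistance=10)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))


def gradient_orientation_similarity(patch_a, patch_b, eps=1e-6):
    """Radiation-invariant similarity between two same-sized patches.

    Correlating unit gradient fields via cos(2*dtheta) ignores absolute
    intensity and is invariant to contrast reversal, which is common
    between optical and SAR imagery.
    """
    def unit_gradients(p):
        gx = cv2.Sobel(p, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(p, cv2.CV_64F, 0, 1, ksize=3)
        mag = np.sqrt(gx ** 2 + gy ** 2) + eps
        return gx / mag, gy / mag

    ax, ay = unit_gradients(patch_a)
    bx, by = unit_gradients(patch_b)
    dot = ax * bx + ay * by        # cos(dtheta) per pixel
    cross = ax * by - ay * bx      # sin(dtheta) per pixel
    return float(np.mean(dot ** 2 - cross ** 2))  # mean cos(2*dtheta)


if __name__ == "__main__":
    # "optical.tif" / "sar.tif" are hypothetical placeholders for any
    # co-registered optical/SAR pair.
    opt = cv2.imread("optical.tif", cv2.IMREAD_GRAYSCALE)
    sar = cv2.imread("sar.tif", cv2.IMREAD_GRAYSCALE)
    opt_s = remove_detail_texture(opt)
    sar_s = remove_detail_texture(sar)
    print(len(structural_keypoints(opt_s)), "candidate structural points")
    print("patch similarity:",
          gradient_orientation_similarity(opt_s[:64, :64], sar_s[:64, :64]))
```

Gradient orientation is a natural basis for such a similarity because it is largely unaffected by the nonlinear radiometric differences, including local contrast reversal, that arise between optical and SAR imagery.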
ISSN: 1569-8432
DOI: 10.1016/j.jag.2024.104186