Sentinel-2A Image Fusion Using a Machine Learning Approach


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2019-12, Vol. 57 (12), p. 9589-9601
Main Authors: Wang, Jing; Huang, Bo; Zhang, Hankui K.; Ma, Peifeng
Format: Article
Language: English
Description
Abstract: The multispectral instrument (MSI) carried by Sentinel-2A has 13 spectral bands with various spatial resolutions (i.e., four 10-m, six 20-m, and three 60-m bands). A wide range of applications requires a 10-m resolution for all spectral bands, including the 20- and 60-m bands. To meet this requirement, previous studies used conventional pansharpening techniques, which require a simulated 10-m panchromatic (PAN) band from four 10-m bands [blue, green, red, and near infrared (NIR)]. The simulated PAN band may not have all the information from the original four bands and may have no spectral response function that overlaps the 20- or 60-m bands to be sharpened, which may degrade fusion quality. This paper presents a machine learning method that can directly use the information from multiple 10-m resolution bands for fusion. The method first learns the spectral relationship between the 20- or 60-m band to be sharpened and the selected 10-m bands degraded to 20 or 60 m using the support vector regression (SVR) model. The model is then applied to the selected 10-m bands to predict the 10-m-resolution version of the 20- or 60-m band. The image degradation process was tuned to closely match the Sentinel-2A MSI modulation transfer function (MTF). We applied our method to three data sets in Guangzhou, China, New South Wales, Australia, and St. Louis, USA, and achieved better fusion results than other commonly used pansharpening methods in terms of both visual and quantitative factors.
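The degrade-learn-predict procedure described in the abstract can be sketched in a few lines. This is a minimal illustration on synthetic arrays, not the authors' implementation: the scikit-learn `SVR` with default parameters, the 2x2 block mean (standing in for the paper's MTF-matched degradation filter), and all array sizes are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins: four 10-m bands (blue, green, red, NIR) on a 20x20 grid.
bands_10m = rng.random((4, 20, 20))

# Hypothetical 20-m band to be sharpened (10x10 grid), spectrally
# correlated with the 10-m bands so the regression has something to learn.
block = lambda a: a.reshape(10, 2, 10, 2).mean(axis=(1, 3))  # 2x2 block mean
band_20m = 0.3 * block(bands_10m[0]) + 0.7 * block(bands_10m[3])

# Step 1: degrade the 10-m bands to 20 m. (The paper tunes this step to
# match the Sentinel-2A MSI MTF; a plain block mean is used here instead.)
bands_degraded = bands_10m.reshape(4, 10, 2, 10, 2).mean(axis=(2, 4))

# Step 2: learn the spectral relationship at 20 m with SVR.
X_train = bands_degraded.reshape(4, -1).T   # (pixels, 4 predictor bands)
y_train = band_20m.ravel()
model = SVR(kernel="rbf").fit(X_train, y_train)

# Step 3: apply the model to the original 10-m bands to predict the
# 10-m-resolution version of the 20-m band.
X_full = bands_10m.reshape(4, -1).T
band_sharpened = model.predict(X_full).reshape(20, 20)
print(band_sharpened.shape)
```

The same scheme would apply to a 60-m band by degrading the predictor bands to 60 m instead; the real method additionally selects which 10-m bands to use as predictors for each target band.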
ISSN:0196-2892
1558-0644
DOI:10.1109/TGRS.2019.2927766