MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining
Saved in:
| Main authors: | , , , , , , , , , , |
| ---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary: | Foundation models have reshaped the landscape of Remote Sensing (RS) by
enhancing various image interpretation tasks. Pretraining is an active research
topic, encompassing supervised and self-supervised learning methods to
initialize model weights effectively. However, transferring the pretrained
models to downstream tasks may encounter task discrepancy due to their
formulation of pretraining as image classification or object discrimination
tasks. In this study, we explore the Multi-Task Pretraining (MTP) paradigm for
RS foundation models to address this issue. Using a shared encoder and
task-specific decoder architecture, we conduct multi-task supervised
pretraining on the SAMRS dataset, encompassing semantic segmentation, instance
segmentation, and rotated object detection. MTP supports both convolutional
neural networks and vision transformer foundation models with over 300 million
parameters. The pretrained models are finetuned on various RS downstream tasks,
such as scene classification, horizontal and rotated object detection, semantic
segmentation, and change detection. Extensive experiments across 14 datasets
demonstrate the superiority of our models over existing ones of similar size
and their competitive performance compared to larger state-of-the-art models,
thus validating the effectiveness of MTP. |
---|---|
DOI: | 10.48550/arxiv.2403.13430 |
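The abstract's shared-encoder, task-specific-decoder setup can be sketched in a few lines. The following is a minimal conceptual illustration only: the linear encoder/decoders, the mean-squared per-task losses, and all names and shapes are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Toy stand-in for the shared backbone (a CNN or ViT in the paper)."""
    def __init__(self, in_dim, feat_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, feat_dim))
    def __call__(self, x):
        # One linear layer with ReLU as a placeholder for deep features.
        return np.maximum(x @ self.W, 0.0)

class TaskDecoder:
    """Toy stand-in for a task-specific head (segmentation / detection)."""
    def __init__(self, feat_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(feat_dim, out_dim))
    def __call__(self, feats):
        return feats @ self.W

def mtp_losses(encoder, decoders, x, targets):
    """Per-task losses computed on one shared feature map."""
    feats = encoder(x)  # encoded once, reused by every decoder
    return {task: float(np.mean((dec(feats) - targets[task]) ** 2))
            for task, dec in decoders.items()}

# The three pretraining tasks named in the abstract, as dictionary keys.
tasks = ["semantic_seg", "instance_seg", "rotated_det"]
encoder = SharedEncoder(in_dim=32, feat_dim=16)
decoders = {t: TaskDecoder(16, 8) for t in tasks}

x = rng.normal(size=(4, 32))                        # a toy input batch
targets = {t: rng.normal(size=(4, 8)) for t in tasks}
losses = mtp_losses(encoder, decoders, x, targets)
total = sum(losses.values())                        # joint multi-task objective
print(sorted(losses), total > 0)
```

In an actual training loop the summed `total` would be backpropagated through all decoders and the shared encoder jointly, which is what lets the backbone absorb supervision from several dense-prediction tasks at once.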