MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining

Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks. Pretraining is an active research topic, encompassing supervised and self-supervised learning methods to initialize model weights effectively. However, transferring the pretrained models to downstream tasks may encounter task discrepancy due to their formulation of pretraining as image classification or object discrimination tasks. In this study, we explore the Multi-Task Pretraining (MTP) paradigm for RS foundation models to address this issue. Using a shared encoder and task-specific decoder architecture, we conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection. MTP supports both convolutional neural networks and vision transformer foundation models with over 300 million parameters. The pretrained models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection. Extensive experiments across 14 datasets demonstrate the superiority of our models over existing ones of similar size and their competitive performance compared to larger state-of-the-art models, thus validating the effectiveness of MTP.

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org 2024-03
Main authors: Wang, Di, Zhang, Jing, Xu, Minqiang, Liu, Lin, Wang, Dongsheng, Gao, Erzhong, Han, Chengxi, Guo, Haonan, Du, Bo, Tao, Dacheng, Zhang, Liangpei
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page
container_issue
container_start_page
container_title arXiv.org
container_volume
creator Wang, Di
Zhang, Jing
Xu, Minqiang
Liu, Lin
Wang, Dongsheng
Gao, Erzhong
Han, Chengxi
Guo, Haonan
Du, Bo
Tao, Dacheng
Zhang, Liangpei
description Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks. Pretraining is an active research topic, encompassing supervised and self-supervised learning methods to initialize model weights effectively. However, transferring the pretrained models to downstream tasks may encounter task discrepancy due to their formulation of pretraining as image classification or object discrimination tasks. In this study, we explore the Multi-Task Pretraining (MTP) paradigm for RS foundation models to address this issue. Using a shared encoder and task-specific decoder architecture, we conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection. MTP supports both convolutional neural networks and vision transformer foundation models with over 300 million parameters. The pretrained models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection. Extensive experiments across 14 datasets demonstrate the superiority of our models over existing ones of similar size and their competitive performance compared to larger state-of-the-art models, thus validating the effectiveness of MTP.
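The shared-encoder, task-specific-decoder setup described above can be illustrated with a minimal sketch. This is not the paper's actual implementation (which uses large CNN/ViT backbones and full segmentation/detection heads); the layer shapes, task names, and MSE objective below are simplified stand-ins chosen only to show how one shared forward pass feeds several task heads whose losses are summed during multi-task pretraining.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W_enc):
    # Shared backbone: a single linear projection + ReLU stands in for
    # the CNN or vision transformer encoder used in the paper.
    return np.maximum(x @ W_enc, 0.0)

# Hypothetical task heads mirroring the three SAMRS pretraining tasks;
# output sizes are illustrative, not the real decoder dimensions.
tasks = {
    "semantic_segmentation": 8,
    "instance_segmentation": 8,
    "rotated_object_detection": 5,  # e.g. (cx, cy, w, h, angle)
}

d_in, d_feat = 16, 32
W_enc = rng.normal(size=(d_in, d_feat)) * 0.1
heads = {name: rng.normal(size=(d_feat, d_out)) * 0.1
         for name, d_out in tasks.items()}

def multi_task_loss(x, targets):
    # One shared forward pass feeds every task-specific head; the
    # pretraining objective here is the plain sum of per-task losses.
    feat = shared_encoder(x, W_enc)
    losses = {name: float(np.mean((feat @ W - targets[name]) ** 2))
              for name, W in heads.items()}
    return sum(losses.values()), losses

x = rng.normal(size=(4, d_in))
targets = {name: rng.normal(size=(4, d_out)) for name, d_out in tasks.items()}
total, per_task = multi_task_loss(x, targets)
print(f"total loss: {total:.4f} across {len(per_task)} tasks")
```

In practice each head would be a full decoder (e.g. a segmentation head or rotated-box regressor) with its own task-appropriate loss, but the control flow (encode once, decode per task, sum losses) is the same.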
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-03
issn 2331-8422
language eng
recordid cdi_proquest_journals_2972955141
source Free E-Journals
subjects Artificial neural networks
Datasets
Image classification
Image enhancement
Image segmentation
Instance segmentation
Machine learning
Object recognition
Remote sensing
Self-supervised learning
Semantic segmentation
Semantics
title MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T15%3A08%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=MTP:%20Advancing%20Remote%20Sensing%20Foundation%20Model%20via%20Multi-Task%20Pretraining&rft.jtitle=arXiv.org&rft.au=Wang,%20Di&rft.date=2024-03-20&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2972955141%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2972955141&rft_id=info:pmid/&rfr_iscdi=true