EarthDial: Turning Multi-sensory Earth Observations to Interactive Dialogues

Automated analysis of vast Earth observation data via interactive Vision-Language Models (VLMs) can unlock new opportunities for environmental monitoring, disaster response, and resource management. Existing generic VLMs do not perform well on Remote Sensing data, while the recent Geo-spatial VLMs remain restricted to a fixed resolution and few sensor modalities. In this paper, we introduce EarthDial, a conversational assistant specifically designed for Earth Observation (EO) data, transforming complex, multi-sensory Earth observations into interactive, natural language dialogues. EarthDial supports multi-spectral, multi-temporal, and multi-resolution imagery, enabling a wide range of remote sensing tasks, including classification, detection, captioning, question answering, visual reasoning, and visual grounding. To achieve this, we introduce an extensive instruction tuning dataset comprising over 11.11M instruction pairs covering RGB, Synthetic Aperture Radar (SAR), and multispectral modalities such as Near-Infrared (NIR) and infrared. Furthermore, EarthDial handles bi-temporal and multi-temporal sequence analysis for applications like change detection. Our extensive experimental results on 37 downstream applications demonstrate that EarthDial outperforms existing generic and domain-specific models, achieving better generalization across various EO tasks.
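The abstract mentions an instruction tuning dataset of multi-sensor instruction pairs. As a rough illustration of what one such record could look like (a minimal sketch; all field names here are hypothetical and not taken from the paper):

```python
# Hypothetical sketch of one multimodal instruction-tuning record for an
# EO conversational assistant; field names are illustrative only.
example_pair = {
    "modality": "SAR",               # e.g. RGB, SAR, or multispectral (NIR)
    "temporal_frames": 2,            # bi-temporal input, e.g. for change detection
    "task": "visual_question_answering",
    "instruction": "Has the flooded area expanded between the two acquisitions?",
    "response": "Yes, the water extent has grown along the northern riverbank.",
}

def is_valid_pair(pair: dict) -> bool:
    """Minimal structural check: a record needs an instruction/response
    pair plus a declared sensor modality."""
    return all(k in pair for k in ("modality", "instruction", "response"))

print(is_valid_pair(example_pair))  # True
```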

Main authors: Soni, Sagar; Dudhane, Akshay; Debary, Hiyam; Fiaz, Mustansar; Munir, Muhammad Akhtar; Danish, Muhammad Sohail; Fraccaro, Paolo; Watson, Campbell D; Klein, Levente J; Khan, Fahad Shahbaz; Khan, Salman
Format: Article
Language: English
DOI: 10.48550/arxiv.2412.15190
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition