TorchSpatial: A Location Encoding Framework and Benchmark for Spatial Representation Learning

Spatial representation learning (SRL) aims at learning general-purpose neural network representations from various types of spatial data (e.g., points, polylines, polygons, networks, images, etc.) in their native formats. Learning good spatial representations is a fundamental problem for various downstream applications such as species distribution modeling, weather forecasting, trajectory generation, geographic question answering, etc. Even though SRL has become the foundation of almost all geospatial artificial intelligence (GeoAI) research, we have not yet seen significant efforts to develop an extensive deep learning framework and benchmark to support SRL model development and evaluation. To fill this gap, we propose TorchSpatial, a learning framework and benchmark for location (point) encoding, which is one of the most fundamental data types of spatial representation learning. TorchSpatial contains three key components: 1) a unified location encoding framework that consolidates 15 commonly recognized location encoders, ensuring scalability and reproducibility of the implementations; 2) the LocBench benchmark tasks encompassing 7 geo-aware image classification and 10 geo-aware image regression datasets; 3) a comprehensive suite of evaluation metrics to quantify geo-aware models' overall performance as well as their geographic bias, with a novel Geo-Bias Score metric. Finally, we provide a detailed analysis and insights into the model performance and geographic bias of different location encoders. We believe TorchSpatial will foster future advancement of spatial representation learning and spatial fairness in GeoAI research. The TorchSpatial model framework, LocBench, and Geo-Bias Score evaluation framework are available at https://github.com/seai-lab/TorchSpatial.
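The core idea behind location (point) encoding is mapping a raw geographic coordinate to a feature vector that a neural network can consume. As an illustration only — this is not TorchSpatial's API, and the function name, parameters, and defaults below are hypothetical — a minimal multi-scale sinusoidal location encoder, in the general spirit of sinusoidal position encodings, can be sketched as:

```python
import numpy as np

def sinusoidal_location_encoding(lon, lat, num_scales=4,
                                 min_radius=1.0, max_radius=360.0):
    """Encode a (lon, lat) point as multi-scale sin/cos features.

    Illustrative sketch, not TorchSpatial's API: each coordinate is
    passed through sin and cos at `num_scales` geometrically spaced
    wavelengths between `min_radius` and `max_radius` degrees.
    Output length: num_scales * 2 coordinates * 2 (sin, cos).
    """
    # Geometrically spaced wavelengths, e.g. 1, ~7.1, ~50.7, 360 degrees.
    exponents = np.arange(num_scales) / max(num_scales - 1, 1)
    scales = min_radius * (max_radius / min_radius) ** exponents
    feats = []
    for s in scales:
        for coord in (lon, lat):
            feats.append(np.sin(coord / s))
            feats.append(np.cos(coord / s))
    return np.array(feats)
```

The multi-scale design lets downstream layers pick up both fine-grained local variation (small wavelengths) and coarse continental patterns (large wavelengths); concrete encoders in the framework differ mainly in how they construct such basis functions over the sphere.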

Bibliographic details
Main authors: Wu, Nemin; Cao, Qian; Wang, Zhangyu; Liu, Zeping; Qi, Yanlin; Zhang, Jielu; Ni, Joshua; Yao, Xiaobai; Ma, Hongxu; Mu, Lan; Ermon, Stefano; Ganu, Tanuja; Nambi, Akshay; Lao, Ni; Mai, Gengchen
Format: Article
Language: English
Online access: Order full text
DOI: 10.48550/arxiv.2406.15658
Date: 2024-06-21
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition