Inferring socioeconomic environment from built environment characteristics based street view images: An approach of Seq2Seq method
Saved in:
Published in: International Journal of Applied Earth Observation and Geoinformation, 2023-09, Vol. 123, p. 103458, Article 103458
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Full text
Abstract: Street view images (SVIs) and points of interest (POIs) are data sources of different modalities that represent different aspects of the urban environment, and neither may fully cover a city. Inferring one type of data from the other enriches our means of sensing cities and facilitates a deeper understanding of place. Intuitively, by analyzing visual elements of the urban built environment captured in street-level photos, such as buildings, roads, and vehicles, we can infer the most likely POI types nearby. Traditional approaches rely mainly on probabilistic statistical models, which lack a holistic understanding of the scene and cannot accurately model the non-linear relationship between the built and socio-economic environments. In response, a Seq2Seq framework is proposed to “translate” between, and thereby bridge, these two types of data. Experiments in Wuhan demonstrated the high performance of the method (accuracy = 0.770, recall = 0.773, F1 = 0.771). This work helps to better understand the “many-to-many” relationship between the two environments (data), effectively enriching our means of perceiving cities, and has potential applications in location awareness and visual navigation.
ISSN: 1569-8432
DOI: 10.1016/j.jag.2023.103458
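
The abstract above describes a Seq2Seq framework that "translates" sequences of built-environment visual elements into likely POI types. Below is a minimal sketch of such an encoder-decoder in Python/PyTorch, included only to illustrate the general idea; the token vocabularies, layer sizes, and training setup are illustrative assumptions, not the configuration published in the article.

    # Minimal Seq2Seq sketch: map a sequence of visual-element tokens
    # (e.g. building, road, vehicle classes from an SVI) to a sequence of
    # POI-type tokens. Vocabulary and layer sizes are hypothetical.
    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, src_vocab=32, tgt_vocab=20, emb=64, hidden=128):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb)
            self.encoder = nn.GRU(emb, hidden, batch_first=True)
            self.decoder = nn.GRU(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src_tokens, tgt_tokens):
            # Encode the visual-element sequence into a context state.
            _, context = self.encoder(self.src_emb(src_tokens))
            # Decode POI-type tokens conditioned on that context (teacher forcing).
            dec_out, _ = self.decoder(self.tgt_emb(tgt_tokens), context)
            return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

    # Toy usage: two street-view "sentences" of visual elements and their POI-type labels.
    model = Seq2Seq()
    src = torch.randint(0, 32, (2, 10))   # hypothetical visual-element token ids
    tgt = torch.randint(0, 20, (2, 6))    # hypothetical POI-type token ids
    logits = model(src, tgt)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 20), tgt.reshape(-1))

Framing both environments as token sequences is what lets a sequence-to-sequence model capture the "many-to-many" relationship mentioned in the abstract, since one set of visual elements can map to several POI types and vice versa.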