Bounding Boxes Are All We Need: Street View Image Classification via Context Encoding of Detected Buildings

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2022, Vol. 60, p. 1-17
Main Authors: Zhao, Kun; Liu, Yongkun; Hao, Siyuan; Lu, Shaoxing; Liu, Hongbin; Zhou, Lijian
Format: Article
Language: English
Subjects:
Online Access: Order full text
container_end_page 17
container_issue
container_start_page 1
container_title IEEE transactions on geoscience and remote sensing
container_volume 60
creator Zhao, Kun
Liu, Yongkun
Hao, Siyuan
Lu, Shaoxing
Liu, Hongbin
Zhou, Lijian
description Street view image classification aimed at urban land use analysis is difficult because the class labels (e.g., commercial area) are concepts at a higher level of abstraction than those of general visual tasks (e.g., persons and cars). Therefore, classification models using only visual features often fail to achieve satisfactory performance. In this article, a novel approach based on a "bottom-up and top-down" framework is proposed. Instead of using visual features of the whole image directly, as common image-level models based on convolutional neural networks (CNNs) do, the proposed framework first obtains low-level semantics, namely, the bounding boxes of buildings in street view images, through a bottom-up object discovery process. Their contextual information, such as the co-occurrence patterns of building classes and their layout, is then encoded into metadata by the proposed algorithm "Context encOding of Detected buildINGs" (CODING). Finally, these metadata (low-level semantics encoded with context information) are abstracted into high-level semantics, namely, the land use label of the street view image, through a top-down semantic aggregation process implemented by a recurrent neural network (RNN). In addition, to effectively discover low-level semantics as the bridge between visual features and more abstract concepts, we built a dual-labeled data set named "Building dEtection And Urban funcTional-zone portraYing" (BEAUTY) of 19,070 street view images and 38,857 buildings, based on the existing BIC_GSV. The data set can be used not only for street view image classification but also for multiclass building detection. Experiments on BEAUTY show that the proposed approach achieves a 12.65% improvement in macro-precision and 12% in macro-recall over image-level CNN-based models. Our code and data set are available at https://github.com/kyle-one/Context-Encoding-of-Detected-Buildings/ .
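The abstract outlines a three-stage pipeline: detect buildings, encode their classes and spatial layout as context features, and aggregate the resulting sequence with an RNN into an image-level land-use label. The following is a minimal sketch of that idea in PyTorch. It is not the authors' released implementation (see the GitHub link in the abstract); the building and land-use class counts, the feature layout, and the choice of a GRU are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's released CODING algorithm):
# each detected building becomes (class one-hot + normalized box geometry),
# boxes are ordered left-to-right to expose layout, and a GRU aggregates
# the sequence into a land-use prediction.
import torch
import torch.nn as nn

NUM_BUILDING_CLASSES = 4   # assumed number of building classes
NUM_LAND_USE_CLASSES = 5   # assumed number of land-use labels

def encode_detections(boxes, classes, img_w, img_h):
    """Turn one image's detections into a sequence of context features.

    boxes:   list of (x1, y1, x2, y2) pixel coordinates
    classes: list of integer building-class ids, same length as boxes
    Returns a (num_boxes, NUM_BUILDING_CLASSES + 4) float tensor.
    """
    feats = []
    # Sort left-to-right so the sequence reflects the spatial layout.
    for (x1, y1, x2, y2), c in sorted(zip(boxes, classes), key=lambda t: t[0][0]):
        one_hot = [0.0] * NUM_BUILDING_CLASSES
        one_hot[c] = 1.0
        # Normalized center and size: where the building sits and how big it is.
        geom = [(x1 + x2) / (2 * img_w), (y1 + y2) / (2 * img_h),
                (x2 - x1) / img_w, (y2 - y1) / img_h]
        feats.append(one_hot + geom)
    return torch.tensor(feats, dtype=torch.float32)

class LandUseClassifier(nn.Module):
    """GRU aggregates per-building features into an image-level label."""
    def __init__(self, in_dim=NUM_BUILDING_CLASSES + 4, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_LAND_USE_CLASSES)

    def forward(self, seq):             # seq: (batch, num_boxes, in_dim)
        _, h = self.rnn(seq)            # h: (1, batch, hidden)
        return self.head(h.squeeze(0))  # (batch, NUM_LAND_USE_CLASSES)

# Usage: two detected buildings in a 640x480 street view image.
seq = encode_detections([(20, 50, 200, 400), (250, 60, 600, 420)],
                        classes=[1, 3], img_w=640, img_h=480)
logits = LandUseClassifier()(seq.unsqueeze(0))  # add batch dimension
print(logits.shape)                             # torch.Size([1, 5])
```

Sorting detections left-to-right is just one simple way to make layout visible to the RNN; the paper's CODING algorithm defines its own encoding of co-occurrence and layout.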
doi_str_mv 10.1109/TGRS.2021.3064316
format Article
fulltext fulltext_linktorsrc
identifier ISSN: 0196-2892
ispartof IEEE transactions on geoscience and remote sensing, 2022, Vol.60, p.1-17
issn 0196-2892
1558-0644
language eng
recordid cdi_ieee_primary_9380541
source IEEE Electronic Library (IEL)
subjects Aggregation
Algorithms
Artificial neural networks
Boxes
Building detection
Buildings
Classification
Coding
Context
context encoding
Datasets
Detection
Feature extraction
Image classification
Image coding
Information processing
Land use
Metadata
Neural networks
recurrent neural network (RNN)
Recurrent neural networks
Semantics
street view images classification
Task analysis
Urban areas
urban functional zone
urban land use classification
Visual tasks
Visualization
title Bounding Boxes Are All We Need: Street View Image Classification via Context Encoding of Detected Buildings