Urban-Area Segmentation Using Visual Words

In this letter, we address the problem of urban-area extraction by using a feature-free image representation concept known as "Visual Words." This method is based on building a "dictionary" of small patches, some of which appear mainly in urban areas. The proposed algorithm is based...

Bibliographic details
Published in: IEEE Geoscience and Remote Sensing Letters, 2009-07, Vol. 6 (3), pp. 388-392
Authors: Weizman, L.; Goldberger, J.
Format: Article
Language: English
Journal: IEEE Geoscience and Remote Sensing Letters, Vol. 6, Issue 3, pp. 388-392
Creators: Weizman, L.; Goldberger, J.
Description: In this letter, we address the problem of urban-area extraction by using a feature-free image representation concept known as "Visual Words." This method is based on building a "dictionary" of small patches, some of which appear mainly in urban areas. The proposed algorithm is based on a new pixel-level variant of visual words and consists of three parts: building a visual dictionary, learning urban words from labeled images, and detecting urban regions in a new image. Using normalized patches makes the method more robust to changes in illumination during acquisition time. The improved performance of the method is demonstrated on real satellite images from three different sensors: LANDSAT, SPOT, and IKONOS. To assess the robustness of our method, the learning and testing procedures were carried out on different and independent images.
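The description outlines a three-stage pipeline: build a visual dictionary of small normalized patches, learn from labeled images which dictionary words are characteristic of urban areas, and then label the pixels of a new image according to the words their patches map to. The following is a minimal, hypothetical sketch of that kind of pipeline, assuming single-band images, a k-means dictionary built with scikit-learn, and a simple frequency-ratio rule for flagging "urban" words; all function names, the patch size, and the threshold are illustrative and not taken from the paper.

```python
# Hypothetical sketch of a patch-based "visual words" urban detector,
# loosely following the three stages described in the abstract:
# (1) build a visual dictionary of small normalized patches,
# (2) learn which dictionary words are "urban" from labeled images,
# (3) score pixels of a new image by the urban-ness of their patches.
# Function and parameter names are illustrative, not the authors' code.

import numpy as np
from sklearn.cluster import KMeans


def extract_patches(image, patch_size=5):
    """Return one normalized patch per interior pixel, shape (N, patch_size**2)."""
    h, w = image.shape
    r = patch_size // 2
    patches = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            p = image[i - r:i + r + 1, j - r:j + r + 1].astype(float).ravel()
            p -= p.mean()                      # normalization makes the words less
            norm = np.linalg.norm(p)           # sensitive to illumination changes
            patches.append(p / norm if norm > 0 else p)
    return np.array(patches)


def build_dictionary(train_images, n_words=128, patch_size=5):
    """Cluster patches from training images into a dictionary of visual words."""
    all_patches = np.vstack([extract_patches(im, patch_size) for im in train_images])
    return KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(all_patches)


def learn_urban_words(dictionary, labeled_images, masks, patch_size=5, threshold=2.0):
    """Flag words that occur much more often inside labeled urban regions."""
    n_words = dictionary.n_clusters
    urban_counts = np.ones(n_words)             # Laplace smoothing
    other_counts = np.ones(n_words)
    for image, mask in zip(labeled_images, masks):
        words = dictionary.predict(extract_patches(image, patch_size))
        r = patch_size // 2
        flat_mask = mask[r:-r, r:-r].ravel().astype(bool)
        urban_counts += np.bincount(words[flat_mask], minlength=n_words)
        other_counts += np.bincount(words[~flat_mask], minlength=n_words)
    ratio = (urban_counts / urban_counts.sum()) / (other_counts / other_counts.sum())
    return ratio > threshold                     # boolean "urban word" flags


def detect_urban(dictionary, urban_words, image, patch_size=5):
    """Per-pixel urban map for a new image (interior pixels only)."""
    h, w = image.shape
    r = patch_size // 2
    words = dictionary.predict(extract_patches(image, patch_size))
    return urban_words[words].reshape(h - 2 * r, w - 2 * r)
```

In use, the dictionary would be fit once on training scenes, the urban-word flags learned from images with ground-truth urban masks, and detect_urban applied to independent test scenes, which mirrors the separation between learning and testing images that the description emphasizes.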
DOI: 10.1109/LGRS.2009.2014400
Format: Article
ISSN: 1545-598X
EISSN: 1558-0571
Published in: IEEE Geoscience and Remote Sensing Letters, 2009-07, Vol. 6 (3), pp. 388-392
Language: English
Source: IEEE Electronic Library (IEL)
Subjects:
Dictionaries
Illumination
Image representation
Image segmentation
Image sensors
Learning
Lighting
Map updating
object detection
Pixel
Remote sensing
Representations
Robustness
Satellites
Segmentation
Urban areas
Visual
visual words
Title: Urban-Area Segmentation Using Visual Words