A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses
With the increasing social demands of disaster response, methods of visual observation for rescue and safety have become increasingly important. However, because of the shortage of datasets for disaster scenarios, there has been little progress in computer vision and robotics in this field. With this in mind, we present the first large-scale synthetic dataset of egocentric viewpoints for disaster scenarios.
Saved in:
Published in: | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023-06, Vol. 45 (6), pp. 6766-6782 |
Main authors: | Jeon, Hae-Gon; Im, Sunghoon; Lee, Byeong-Uk; Rameau, Francois; Choi, Dong-Geol; Oh, Jean; Kweon, In So; Hebert, Martial |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
container_end_page | 6782 |
container_issue | 6 |
container_start_page | 6766 |
container_title | IEEE transactions on pattern analysis and machine intelligence |
container_volume | 45 |
creator | Jeon, Hae-Gon; Im, Sunghoon; Lee, Byeong-Uk; Rameau, Francois; Choi, Dong-Geol; Oh, Jean; Kweon, In So; Hebert, Martial |
description | With the increasing social demands of disaster response, methods of visual observation for rescue and safety have become increasingly important. However, because of the shortage of datasets for disaster scenarios, there has been little progress in computer vision and robotics in this field. With this in mind, we present the first large-scale synthetic dataset of egocentric viewpoints for disaster scenarios. We simulate pre- and post-disaster cases with drastic changes in appearance, such as buildings on fire and earthquakes. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for the semantic label, depth in metric scale, optical flow with sub-pixel precision, and surface normal as well as their corresponding camera poses. To create realistic disaster scenes, we manually augment the effects with 3D models using physically-based graphics tools. We train various state-of-the-art methods to perform computer vision tasks using our dataset, evaluate how well these methods recognize the disaster situations, and produce reliable results of virtual scenes as well as real-world images. We also present a convolutional neural network-based egocentric localization method that is robust to drastic appearance changes, such as the texture changes in a fire, and layout changes from a collapse. To address these key challenges, we propose a new model that learns a shape-based representation by training on stylized images, and incorporate the dominant planes of query images as approximate scene coordinates. We evaluate the proposed method using various scenes including a simulated disaster dataset to demonstrate the effectiveness of our method when confronted with significant changes in scene layout. Experimental results show that our method provides reliable camera pose predictions despite vastly changed conditions. |
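The description above lists per-frame ground truth for semantic labels, metric depth, sub-pixel optical flow, surface normals, and camera poses alongside each stereo pair. As a rough illustration of how one such annotated sample might be consumed, here is a minimal Python sketch; the directory layout, file names, and storage formats below are assumptions made for illustration only, since the record does not document the dataset's actual structure.

```python
# Minimal sketch of reading one annotated stereo sample from a dataset like
# the one described in the abstract. All paths, file names, and file formats
# here are hypothetical -- the record does not specify them.
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image


@dataclass
class DisasterSample:
    left: np.ndarray      # H x W x 3 RGB, left view of the stereo pair
    right: np.ndarray     # H x W x 3 RGB, right view
    semantic: np.ndarray  # H x W integer class labels
    depth: np.ndarray     # H x W depth in metres (metric scale)
    flow: np.ndarray      # H x W x 2 optical flow, sub-pixel precision
    normal: np.ndarray    # H x W x 3 surface normals
    pose: np.ndarray      # 4 x 4 camera-to-world pose


def load_sample(root: Path, frame_id: str) -> DisasterSample:
    """Load one frame and all of its ground-truth annotations."""
    def img(name: str) -> np.ndarray:
        # Image-valued channels assumed to be stored as PNG files.
        return np.asarray(Image.open(root / f"{name}_{frame_id}.png"))

    return DisasterSample(
        left=img("left"),
        right=img("right"),
        semantic=img("semantic"),
        # Dense float annotations assumed to be stored as .npy arrays.
        depth=np.load(root / f"depth_{frame_id}.npy"),
        flow=np.load(root / f"flow_{frame_id}.npy"),
        normal=np.load(root / f"normal_{frame_id}.npy"),
        # Pose assumed to be a flattened 4x4 matrix in a text file.
        pose=np.loadtxt(root / f"pose_{frame_id}.txt").reshape(4, 4),
    )


if __name__ == "__main__":
    sample = load_sample(Path("disaster_dataset/fire_scene_01"), "000000")
    print(sample.left.shape, float(sample.depth.min()), float(sample.depth.max()))
```

Grouping all modalities in a single record per frame keeps each stereo pair aligned with its annotations, which is convenient when training the multi-task recognition and localization networks the abstract mentions.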
doi_str_mv | 10.1109/TPAMI.2021.3094531 |
format | Article |
fullrecord | <record><control><sourceid>proquest_RIE</sourceid><recordid>TN_cdi_proquest_miscellaneous_2549691070</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9476992</ieee_id><sourcerecordid>2549691070</sourcerecordid><originalsourceid>FETCH-LOGICAL-c351t-ec50d26855a0a431fd8555a51d1c7cc241a028dcb081107d3860002d3c2f03e43</originalsourceid><addsrcrecordid>eNpdkE1PHDEMhiNUBFvgD1CpisSll1mcOJlNjis-WqStQJT2GoWMBw2anWyTmUP59Q3dLQdOtuTnteyHsVMBcyHAnj_cLb_fzCVIMUewSqPYYzNh0Vao0X5gMxC1rIyR5pB9zPkZQCgNeMAOUUmUppYzdrfkK5-eqPoRfE_8V5fGyff80o8-08j90PCrpxhoGFMX-CoWqnvxYxcH3sbEL7vs80iJ31PexCFTPmb7re8znezqEft5ffVw8a1a3X69uViuqoBajBUFDY2sjdYevELRNqXVXotGhEUIUgkP0jThEUz5ddGgqQFANhhkC0gKj9iX7d5Nir8nyqNbdzlQ3_uB4pSd1MrWtkShoGfv0Oc4paFc56QBa6xCiYWSWyqkmHOi1m1St_bpjxPgXn27f77dq2-3811Cn3erp8c1NW-R_4IL8GkLdET0NrZqUVsr8S9DwoIa</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2809894323</pqid></control><display><type>article</type><title>A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses</title><source>IEEE Electronic Library (IEL)</source><creator>Jeon, Hae-Gon ; Im, Sunghoon ; Lee, Byeong-Uk ; Rameau, Francois ; Choi, Dong-Geol ; Oh, Jean ; Kweon, In So ; Hebert, Martial</creator><creatorcontrib>Jeon, Hae-Gon ; Im, Sunghoon ; Lee, Byeong-Uk ; Rameau, Francois ; Choi, Dong-Geol ; Oh, Jean ; Kweon, In So ; Hebert, Martial</creatorcontrib><description>With the increasing social demands of disaster response, methods of visual observation for rescue and safety have become increasingly important. However, because of the shortage of datasets for disaster scenarios, there has been little progress in computer vision and robotics in this field. With this in mind, we present the first large-scale synthetic dataset of egocentric viewpoints for disaster scenarios. We simulate pre- and post-disaster cases with drastic changes in appearance, such as buildings on fire and earthquakes. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for the semantic label, depth in metric scale, optical flow with sub-pixel precision, and surface normal as well as their corresponding camera poses. To create realistic disaster scenes, we manually augment the effects with 3D models using physically-based graphics tools. We train various state-of-the-art methods to perform computer vision tasks using our dataset, evaluate how well these methods recognize the disaster situations, and produce reliable results of virtual scenes as well as real-world images. We also present a convolutional neural network-based egocentric localization method that is robust to drastic appearance changes, such as the texture changes in a fire, and layout changes from a collapse. To address these key challenges, we propose a new model that learns a shape-based representation by training on stylized images, and incorporate the dominant planes of query images as approximate scene coordinates. We evaluate the proposed method using various scenes including a simulated disaster dataset to demonstrate the effectiveness of our method when confronted with significant changes in scene layout. 
Experimental results show that our method provides reliable camera pose predictions despite vastly changed conditions.</description><identifier>ISSN: 0162-8828</identifier><identifier>EISSN: 1939-3539</identifier><identifier>EISSN: 2160-9292</identifier><identifier>DOI: 10.1109/TPAMI.2021.3094531</identifier><identifier>PMID: 34232862</identifier><identifier>CODEN: ITPIDJ</identifier><language>eng</language><publisher>United States: IEEE</publisher><subject>Artificial neural networks ; Buildings ; camera relocalization ; Cameras ; Computer vision ; Datasets ; Disaster management ; disaster scenarios ; egocentric localization ; Image resolution ; Large-scale dataset ; Layouts ; Localization ; Localization method ; Location awareness ; Optical flow (image analysis) ; Robotics ; Semantics ; Synthetic data ; Task analysis ; Three dimensional models ; Three-dimensional displays ; Virtual reality ; Visual observation ; visual odometry ; Visualization</subject><ispartof>IEEE transactions on pattern analysis and machine intelligence, 2023-06, Vol.45 (6), p.6766-6782</ispartof><rights>Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023</rights><lds50>peer_reviewed</lds50><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c351t-ec50d26855a0a431fd8555a51d1c7cc241a028dcb081107d3860002d3c2f03e43</citedby><cites>FETCH-LOGICAL-c351t-ec50d26855a0a431fd8555a51d1c7cc241a028dcb081107d3860002d3c2f03e43</cites><orcidid>0000-0001-9626-5983 ; 0000-0003-2664-9863 ; 0000-0001-9709-2658 ; 0000-0003-1105-1666 ; 0000-0002-3345-5306 ; 0000-0001-9776-8101</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9476992$$EHTML$$P50$$Gieee$$H</linktohtml><link.rule.ids>314,780,784,796,27924,27925,54758</link.rule.ids><linktorsrc>$$Uhttps://ieeexplore.ieee.org/document/9476992$$EView_record_in_IEEE$$FView_record_in_$$GIEEE</linktorsrc><backlink>$$Uhttps://www.ncbi.nlm.nih.gov/pubmed/34232862$$D View this record in MEDLINE/PubMed$$Hfree_for_read</backlink></links><search><creatorcontrib>Jeon, Hae-Gon</creatorcontrib><creatorcontrib>Im, Sunghoon</creatorcontrib><creatorcontrib>Lee, Byeong-Uk</creatorcontrib><creatorcontrib>Rameau, Francois</creatorcontrib><creatorcontrib>Choi, Dong-Geol</creatorcontrib><creatorcontrib>Oh, Jean</creatorcontrib><creatorcontrib>Kweon, In So</creatorcontrib><creatorcontrib>Hebert, Martial</creatorcontrib><title>A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses</title><title>IEEE transactions on pattern analysis and machine intelligence</title><addtitle>TPAMI</addtitle><addtitle>IEEE Trans Pattern Anal Mach Intell</addtitle><description>With the increasing social demands of disaster response, methods of visual observation for rescue and safety have become increasingly important. However, because of the shortage of datasets for disaster scenarios, there has been little progress in computer vision and robotics in this field. With this in mind, we present the first large-scale synthetic dataset of egocentric viewpoints for disaster scenarios. We simulate pre- and post-disaster cases with drastic changes in appearance, such as buildings on fire and earthquakes. 
The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for the semantic label, depth in metric scale, optical flow with sub-pixel precision, and surface normal as well as their corresponding camera poses. To create realistic disaster scenes, we manually augment the effects with 3D models using physically-based graphics tools. We train various state-of-the-art methods to perform computer vision tasks using our dataset, evaluate how well these methods recognize the disaster situations, and produce reliable results of virtual scenes as well as real-world images. We also present a convolutional neural network-based egocentric localization method that is robust to drastic appearance changes, such as the texture changes in a fire, and layout changes from a collapse. To address these key challenges, we propose a new model that learns a shape-based representation by training on stylized images, and incorporate the dominant planes of query images as approximate scene coordinates. We evaluate the proposed method using various scenes including a simulated disaster dataset to demonstrate the effectiveness of our method when confronted with significant changes in scene layout. Experimental results show that our method provides reliable camera pose predictions despite vastly changed conditions.</description><subject>Artificial neural networks</subject><subject>Buildings</subject><subject>camera relocalization</subject><subject>Cameras</subject><subject>Computer vision</subject><subject>Datasets</subject><subject>Disaster management</subject><subject>disaster scenarios</subject><subject>egocentric localization</subject><subject>Image resolution</subject><subject>Large-scale dataset</subject><subject>Layouts</subject><subject>Localization</subject><subject>Localization method</subject><subject>Location awareness</subject><subject>Optical flow (image analysis)</subject><subject>Robotics</subject><subject>Semantics</subject><subject>Synthetic data</subject><subject>Task analysis</subject><subject>Three dimensional models</subject><subject>Three-dimensional displays</subject><subject>Virtual reality</subject><subject>Visual observation</subject><subject>visual odometry</subject><subject>Visualization</subject><issn>0162-8828</issn><issn>1939-3539</issn><issn>2160-9292</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2023</creationdate><recordtype>article</recordtype><sourceid>RIE</sourceid><recordid>eNpdkE1PHDEMhiNUBFvgD1CpisSll1mcOJlNjis-WqStQJT2GoWMBw2anWyTmUP59Q3dLQdOtuTnteyHsVMBcyHAnj_cLb_fzCVIMUewSqPYYzNh0Vao0X5gMxC1rIyR5pB9zPkZQCgNeMAOUUmUppYzdrfkK5-eqPoRfE_8V5fGyff80o8-08j90PCrpxhoGFMX-CoWqnvxYxcH3sbEL7vs80iJ31PexCFTPmb7re8znezqEft5ffVw8a1a3X69uViuqoBajBUFDY2sjdYevELRNqXVXotGhEUIUgkP0jThEUz5ddGgqQFANhhkC0gKj9iX7d5Nir8nyqNbdzlQ3_uB4pSd1MrWtkShoGfv0Oc4paFc56QBa6xCiYWSWyqkmHOi1m1St_bpjxPgXn27f77dq2-3811Cn3erp8c1NW-R_4IL8GkLdET0NrZqUVsr8S9DwoIa</recordid><startdate>20230601</startdate><enddate>20230601</enddate><creator>Jeon, Hae-Gon</creator><creator>Im, Sunghoon</creator><creator>Lee, Byeong-Uk</creator><creator>Rameau, Francois</creator><creator>Choi, Dong-Geol</creator><creator>Oh, Jean</creator><creator>Kweon, In So</creator><creator>Hebert, Martial</creator><general>IEEE</general><general>The Institute of Electrical and Electronics Engineers, Inc. 
(IEEE)</general><scope>97E</scope><scope>RIA</scope><scope>RIE</scope><scope>NPM</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7SC</scope><scope>7SP</scope><scope>8FD</scope><scope>JQ2</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><scope>7X8</scope><orcidid>https://orcid.org/0000-0001-9626-5983</orcidid><orcidid>https://orcid.org/0000-0003-2664-9863</orcidid><orcidid>https://orcid.org/0000-0001-9709-2658</orcidid><orcidid>https://orcid.org/0000-0003-1105-1666</orcidid><orcidid>https://orcid.org/0000-0002-3345-5306</orcidid><orcidid>https://orcid.org/0000-0001-9776-8101</orcidid></search><sort><creationdate>20230601</creationdate><title>A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses</title><author>Jeon, Hae-Gon ; Im, Sunghoon ; Lee, Byeong-Uk ; Rameau, Francois ; Choi, Dong-Geol ; Oh, Jean ; Kweon, In So ; Hebert, Martial</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c351t-ec50d26855a0a431fd8555a51d1c7cc241a028dcb081107d3860002d3c2f03e43</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2023</creationdate><topic>Artificial neural networks</topic><topic>Buildings</topic><topic>camera relocalization</topic><topic>Cameras</topic><topic>Computer vision</topic><topic>Datasets</topic><topic>Disaster management</topic><topic>disaster scenarios</topic><topic>egocentric localization</topic><topic>Image resolution</topic><topic>Large-scale dataset</topic><topic>Layouts</topic><topic>Localization</topic><topic>Localization method</topic><topic>Location awareness</topic><topic>Optical flow (image analysis)</topic><topic>Robotics</topic><topic>Semantics</topic><topic>Synthetic data</topic><topic>Task analysis</topic><topic>Three dimensional models</topic><topic>Three-dimensional displays</topic><topic>Virtual reality</topic><topic>Visual observation</topic><topic>visual odometry</topic><topic>Visualization</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Jeon, Hae-Gon</creatorcontrib><creatorcontrib>Im, Sunghoon</creatorcontrib><creatorcontrib>Lee, Byeong-Uk</creatorcontrib><creatorcontrib>Rameau, Francois</creatorcontrib><creatorcontrib>Choi, Dong-Geol</creatorcontrib><creatorcontrib>Oh, Jean</creatorcontrib><creatorcontrib>Kweon, In So</creatorcontrib><creatorcontrib>Hebert, Martial</creatorcontrib><collection>IEEE All-Society Periodicals Package (ASPP) 2005-present</collection><collection>IEEE All-Society Periodicals Package (ASPP) 1998-Present</collection><collection>IEEE Electronic Library (IEL)</collection><collection>PubMed</collection><collection>CrossRef</collection><collection>Computer and Information Systems Abstracts</collection><collection>Electronics & Communications Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest Computer Science Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><collection>MEDLINE - Academic</collection><jtitle>IEEE transactions on pattern analysis and machine intelligence</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext_linktorsrc</fulltext></delivery><addata><au>Jeon, Hae-Gon</au><au>Im, Sunghoon</au><au>Lee, Byeong-Uk</au><au>Rameau, Francois</au><au>Choi, Dong-Geol</au><au>Oh, 
Jean</au><au>Kweon, In So</au><au>Hebert, Martial</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses</atitle><jtitle>IEEE transactions on pattern analysis and machine intelligence</jtitle><stitle>TPAMI</stitle><addtitle>IEEE Trans Pattern Anal Mach Intell</addtitle><date>2023-06-01</date><risdate>2023</risdate><volume>45</volume><issue>6</issue><spage>6766</spage><epage>6782</epage><pages>6766-6782</pages><issn>0162-8828</issn><eissn>1939-3539</eissn><eissn>2160-9292</eissn><coden>ITPIDJ</coden><abstract>With the increasing social demands of disaster response, methods of visual observation for rescue and safety have become increasingly important. However, because of the shortage of datasets for disaster scenarios, there has been little progress in computer vision and robotics in this field. With this in mind, we present the first large-scale synthetic dataset of egocentric viewpoints for disaster scenarios. We simulate pre- and post-disaster cases with drastic changes in appearance, such as buildings on fire and earthquakes. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for the semantic label, depth in metric scale, optical flow with sub-pixel precision, and surface normal as well as their corresponding camera poses. To create realistic disaster scenes, we manually augment the effects with 3D models using physically-based graphics tools. We train various state-of-the-art methods to perform computer vision tasks using our dataset, evaluate how well these methods recognize the disaster situations, and produce reliable results of virtual scenes as well as real-world images. We also present a convolutional neural network-based egocentric localization method that is robust to drastic appearance changes, such as the texture changes in a fire, and layout changes from a collapse. To address these key challenges, we propose a new model that learns a shape-based representation by training on stylized images, and incorporate the dominant planes of query images as approximate scene coordinates. We evaluate the proposed method using various scenes including a simulated disaster dataset to demonstrate the effectiveness of our method when confronted with significant changes in scene layout. Experimental results show that our method provides reliable camera pose predictions despite vastly changed conditions.</abstract><cop>United States</cop><pub>IEEE</pub><pmid>34232862</pmid><doi>10.1109/TPAMI.2021.3094531</doi><tpages>17</tpages><orcidid>https://orcid.org/0000-0001-9626-5983</orcidid><orcidid>https://orcid.org/0000-0003-2664-9863</orcidid><orcidid>https://orcid.org/0000-0001-9709-2658</orcidid><orcidid>https://orcid.org/0000-0003-1105-1666</orcidid><orcidid>https://orcid.org/0000-0002-3345-5306</orcidid><orcidid>https://orcid.org/0000-0001-9776-8101</orcidid></addata></record> |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0162-8828 |
ispartof | IEEE transactions on pattern analysis and machine intelligence, 2023-06, Vol.45 (6), p.6766-6782 |
issn | 0162-8828; 1939-3539; 2160-9292 |
language | eng |
recordid | cdi_proquest_miscellaneous_2549691070 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Buildings; camera relocalization; Cameras; Computer vision; Datasets; Disaster management; disaster scenarios; egocentric localization; Image resolution; Large-scale dataset; Layouts; Localization; Localization method; Location awareness; Optical flow (image analysis); Robotics; Semantics; Synthetic data; Task analysis; Three dimensional models; Three-dimensional displays; Virtual reality; Visual observation; visual odometry; Visualization |
title | A Large-Scale Virtual Dataset and Egocentric Localization for Disaster Responses |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T14%3A35%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Large-Scale%20Virtual%20Dataset%20and%20Egocentric%20Localization%20for%20Disaster%20Responses&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Jeon,%20Hae-Gon&rft.date=2023-06-01&rft.volume=45&rft.issue=6&rft.spage=6766&rft.epage=6782&rft.pages=6766-6782&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2021.3094531&rft_dat=%3Cproquest_RIE%3E2549691070%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2809894323&rft_id=info:pmid/34232862&rft_ieee_id=9476992&rfr_iscdi=true |