ViViD++ : Vision for Visibility Dataset

In this letter, we present a dataset capturing diverse visual data formats that target varying luminance conditions. While RGB cameras provide nourishing and intuitive information, changes in lighting conditions potentially result in catastrophic failure for robotic applications based on vision sensors. Approaches overcoming illumination problems have included developing more robust algorithms or other types of visual sensors, such as thermal and event cameras. Despite the alternative sensors' potential, there are still few datasets with alternative vision sensors. Thus, we provide a dataset recorded from alternative vision sensors, handheld or mounted on a car, repeatedly in the same space but under different conditions. We aim to acquire visible information from co-aligned alternative vision sensors. Our sensor system collects data more independently of visible light intensity by measuring the amount of infrared dissipation, depth by structured reflection, and instantaneous temporal changes in luminance. We provide these measurements along with inertial sensors and ground truth for developing robust visual SLAM under poor illumination.

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE robotics and automation letters, 2022-07, Vol. 7 (3), p. 6282-6289
Main Authors: Lee, Alex Junho; Cho, Younggun; Shin, Young-sik; Kim, Ayoung; Myung, Hyun
Format: Article
Language: English
Subjects:
Online Access: Order full text
DOI: 10.1109/LRA.2022.3168335
ISSN: 2377-3766
Source: IEEE Electronic Library (IEL)
Subjects:
Algorithms
Cameras
Catastrophic events
Data collection
data sets for robot learning
data sets for robotic vision
Data sets for SLAM
Datasets
Illumination
Inertial sensing devices
Lighting
Luminance
Luminous intensity
Robotics
Robustness
Sensor phenomena and characterization
Sensors
Simultaneous localization and mapping
Thermal sensors
Visibility
Visualization