CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities


Bibliographic Details
Main Authors: Chavez, Arturo Gomez, Ranieri, Andrea, Chiarella, Davide, Zereik, Enrica, Babić, Anja, Birk, Andreas
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Chavez, Arturo Gomez
Ranieri, Andrea
Chiarella, Davide
Zereik, Enrica
Babić, Anja
Birk, Andreas
description In this article we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large dataset in underwater environments targeting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (~10K) of divers performing hand gestures to communicate and interact with an AUV in different environmental conditions. These gesture samples serve to test the robustness of object detection and classification algorithms against underwater image distortions, i.e., color attenuation and light backscatter. The second part includes stereo footage (~12.7K) of divers free-swimming in front of the AUV, along with synchronized measurements from IMUs located throughout the diver's suit (DiverNet), which serve as ground truth for human pose estimation and tracking methods. In both cases, the rectified images allow investigation of 3D representation and reasoning pipelines for the low-texture targets commonly present in underwater scenarios. In this paper we describe our recording platform and sensor calibration procedure, as well as the data format and the utilities provided to use the dataset.
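Because the dataset provides rectified stereo pairs, per-pixel disparity can in principle be converted directly to metric depth. A minimal sketch of that conversion follows; the focal length and baseline used here are hypothetical placeholder values for illustration, not the dataset's actual calibration parameters.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d (pinhole model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration values, for illustration only.
depth_m = depth_from_disparity(disparity_px=40.0, focal_px=800.0, baseline_m=0.12)
print(round(depth_m, 2))  # 800 * 0.12 / 40 = 2.4 metres
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why low-texture underwater targets (where disparity estimates are noisy) make 3D reconstruction challenging.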
doi_str_mv 10.48550/arxiv.1807.04856
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1807.04856
language eng
recordid cdi_arxiv_primary_1807_04856
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Robotics
title CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities