Effect of display of YOLO’s object recognition results to HMD for an operator controlling a mobile robot

An operator feels a burden when controlling a rescue robot remotely because he/she has to keep watching camera images to find the target object. We think this burden can be reduced by combining a Head Mounted Display (HMD) with object recognition by deep learning. In the first half of this study, we examine how the method of presenting recognition results from You Only Look Once (YOLO), a deep learning algorithm, affects an operator wearing an HMD. In the experiment, three presentation methods were set: no display of object recognition, display of only one object recognition result, and display of 80 kinds of object recognition results. Under each presentation method, we measured the time it took the operator to operate the robot and complete the given task. Additionally, we administered a questionnaire for each experiment. The questionnaire results showed that the method presenting only one object recognition result was useful. In the second half of this study, we develop a system that presents 3D images with YOLO results added, to further ease the burden of object search, and we show numerically that this system represents depth. In the experiment, two display methods were set up: 2D images with Bounding Boxes (BB) by YOLO and 3D images with BB by YOLO. For each presentation method, the operator operated the robot and the number of objects found within a time limit was recorded. Additionally, we administered a questionnaire at the end of the search in each condition and at the end of all the experiments. The questionnaire results suggested points that need to be improved. Furthermore, we consider the flicker of the image observed in the experiment.
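
A minimal Python sketch (not taken from the paper) of the presentation conditions described above: YOLO detections are filtered to a single target class before bounding boxes are overlaid on the camera frame sent to the HMD. The Detection structure, the class names, and the overlay_detections helper are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass
from typing import List, Optional
import cv2          # OpenCV, used here only to draw the overlay
import numpy as np

@dataclass
class Detection:
    class_name: str     # one of YOLO's 80 COCO class names
    confidence: float   # detection score in [0, 1]
    box: tuple          # (x1, y1, x2, y2) in pixel coordinates

def overlay_detections(frame: np.ndarray,
                       detections: List[Detection],
                       target_class: Optional[str] = None) -> np.ndarray:
    """Draw bounding boxes on a copy of the camera frame.

    With target_class set, only that class is drawn (the "display only one
    object recognition result" condition); with target_class=None, every
    detection is drawn (the "80 kinds" condition).
    """
    out = frame.copy()
    for det in detections:
        if target_class is not None and det.class_name != target_class:
            continue
        x1, y1, x2, y2 = det.box
        cv2.rectangle(out, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(out, f"{det.class_name} {det.confidence:.2f}",
                    (x1, max(y1 - 5, 0)), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 255, 0), 1)
    return out

# Example: show only "bottle" detections on a dummy 480x640 frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
detections = [Detection("bottle", 0.91, (100, 120, 180, 260)),
              Detection("chair", 0.67, (300, 200, 420, 400))]
hmd_frame = overlay_detections(frame, detections, target_class="bottle")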

Bibliographic details
Published in: Artificial life and robotics, 2023-05, Vol. 28 (2), pp. 323-331
Main authors: Sasaki, Yuichi; Kamegawa, Tetsushi; Gofuku, Akio
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s10015-023-00856-0
ISSN: 1433-5298
EISSN: 1614-7456
Subjects:
Algorithms
Artificial Intelligence
Computation by Abstract Devices
Computer Science
Control
Deep learning
Experiments
Helmet mounted displays
Machine learning
Mechatronics
Object recognition
Original Article
Questionnaires
Robotics
Robots
Three dimensional imaging
Time measurement