A Deep Reinforcement Learning Approach to Audio-Based Navigation in a Multi-Speaker Environment

In this work we use deep reinforcement learning to create an autonomous agent that can navigate in a two-dimensional space using only raw auditory sensory information from the environment, a problem that has received very little attention in the reinforcement learning literature. Our experiments show that the agent can successfully identify a particular target speaker among a set of \(N\) predefined speakers in a room and move itself towards that speaker, while avoiding collision with other speakers or going outside the room boundaries. The agent is shown to be robust to speaker pitch shifting and it can learn to navigate the environment, even when a limited number of training utterances are available for each speaker.
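To make the setup described in the abstract concrete, the sketch below outlines one way such an environment could be structured: a unit-square room containing N speakers, an agent that observes a distance-attenuated mixture of their audio frames together with the target speaker's clean utterance, and terminal rewards for reaching the target or colliding. The class name, the mixing model, the observation format, and the reward values are illustrative assumptions, not the authors' implementation, which the abstract alone does not specify.

# Hypothetical sketch of a 2D multi-speaker audio-navigation environment.
# All names, rewards, and the distance-based mixing are illustrative assumptions.
import numpy as np


class MultiSpeakerRoom:
    """Agent moves in a unit square and hears a distance-attenuated mixture
    of N speakers; it must reach the target speaker while avoiding the
    other speakers and the room boundary."""

    # Discrete actions: up, down, left, right (step size 0.05).
    ACTIONS = np.array([(0.0, 0.05), (0.0, -0.05), (-0.05, 0.0), (0.05, 0.0)])

    def __init__(self, n_speakers=3, frame_len=1024, sr=16000, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_speakers = n_speakers
        # Stand-in "utterances": one sinusoid per speaker. A real setup
        # would stream frames of recorded speech instead.
        t = np.arange(frame_len) / sr
        self.utterances = [np.sin(2 * np.pi * f * t) for f in 200 + 150 * np.arange(n_speakers)]

    def reset(self):
        self.speaker_pos = self.rng.uniform(0.1, 0.9, size=(self.n_speakers, 2))
        self.target = int(self.rng.integers(self.n_speakers))
        self.agent_pos = np.array([0.5, 0.05])
        return self._observe()

    def _observe(self):
        # Mix all speakers, attenuated by their distance to the agent, and
        # also expose the target's clean utterance so the agent can learn
        # *whom* to approach (one possible conditioning choice).
        dists = np.linalg.norm(self.speaker_pos - self.agent_pos, axis=1)
        mix = sum(u / (1.0 + 10.0 * d) for u, d in zip(self.utterances, dists))
        return np.stack([mix, self.utterances[self.target]])

    def step(self, action):
        self.agent_pos = self.agent_pos + self.ACTIONS[action]
        dists = np.linalg.norm(self.speaker_pos - self.agent_pos, axis=1)
        out_of_bounds = np.any(self.agent_pos < 0.0) or np.any(self.agent_pos > 1.0)
        if dists[self.target] < 0.05:           # reached the target speaker
            return self._observe(), 1.0, True
        if out_of_bounds or np.any(np.delete(dists, self.target) < 0.05):
            return self._observe(), -1.0, True  # collision or left the room
        return self._observe(), -0.01, False    # small per-step penalty


env = MultiSpeakerRoom()
obs = env.reset()
obs, reward, done = env.step(0)  # move "up" one step

A deep RL policy would then be trained on the (observation, action, reward) tuples produced by such an interface, with recorded speech utterances replacing the placeholder sinusoids.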

Full Description

Bibliographic Details
Published in: arXiv.org, 2021-05
Main Authors: Giannakopoulos, Petros, Pikrakis, Aggelos, Cotronis, Yannis
Format: Article
Language: English
Subjects:
Online Access: Full text
container_title arXiv.org
creator Giannakopoulos, Petros
Pikrakis, Aggelos
Cotronis, Yannis
description In this work we use deep reinforcement learning to create an autonomous agent that can navigate in a two-dimensional space using only raw auditory sensory information from the environment, a problem that has received very little attention in the reinforcement learning literature. Our experiments show that the agent can successfully identify a particular target speaker among a set of \(N\) predefined speakers in a room and move itself towards that speaker, while avoiding collision with other speakers or going outside the room boundaries. The agent is shown to be robust to speaker pitch shifting and it can learn to navigate the environment, even when a limited number of training utterances are available for each speaker.
doi_str_mv 10.48550/arxiv.2105.04488
format Article
publisher Ithaca: Cornell University Library, arXiv.org
rights http://creativecommons.org/licenses/by/4.0 (CC BY 4.0)
published version doi 10.1109/ICASSP39728.2021.9415013 (ICASSP 2021)
creationdate 2021-05-10
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-05
issn 2331-8422
language eng
recordid cdi_arxiv_primary_2105_04488
source Freely Accessible Journals; arXiv.org
subjects Audio equipment
Auditory perception
Autonomous navigation
Collision avoidance
Computer Science - Learning
Computer Science - Sound
Deep learning
Target recognition
title A Deep Reinforcement Learning Approach to Audio-Based Navigation in a Multi-Speaker Environment