Artificial Ear for Robots
Saved in:
Main authors: | Sungmok Hwang; Ki-Hoon Shin; Youngjin Park |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Acoustic reflection; Ear; Frequency; Humans; Machine vision; Magnetic heads; Microphone arrays; Robot vision systems; Shape; Transfer functions |
Online access: | Order full text |
container_end_page | 1463 |
---|---|
container_issue | |
container_start_page | 1460 |
container_title | 2006 5th IEEE Conference on Sensors |
container_volume | |
creator | Sungmok Hwang; Ki-Hoon Shin; Youngjin Park |
description | Robots with sound localization capability have been developed mostly relying on machine vision or an array of more than two microphones. Humans can localize a sound source hidden from the eye and distinguish whether a sound is coming from the front or rear, above or below, with only two ears. This is mainly due to the complex shape of the pinna. In particular, reflections off the posterior wall of the concha produce spectral notches at different frequencies in the head-related transfer function (HRTF) as the sound source shifts in position. From the non-individualized HRTFs measured with a B&K HATS (head and torso simulator), we were able to confirm the relationship between the spectral notches and the geometry of the concha. Based on observation of the HATS' pinna and the resulting HRTFs, a novel design of artificial ear that can be mounted on a robot head is proposed in order to explore the possibility of developing sound localization sensors using only two microphones. Experimental results using the designed artificial ear show that the spectral notches change distinctly with elevation in the frontal region, whereas they disappear in the rear. In view of this result, it is expected that a sound direction in 3-D space can be pinpointed using only two microphones, and the designed artificial ear can serve as a suitable mechanical sensor for sound source localization. (An illustrative sketch of the notch mechanism follows this record.) |
doi_str_mv | 10.1109/ICSENS.2007.355909 |
format | Conference Proceeding |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1930-0395 |
ispartof | 2006 5th IEEE Conference on Sensors, 2006, p.1460-1463 |
issn | 1930-0395 2168-9229 |
language | eng |
recordid | cdi_ieee_primary_4178904 |
source | IEEE Electronic Library (IEL) Conference Proceedings |
subjects | Acoustic reflection; Ear; Frequency; Humans; Machine vision; Magnetic heads; Microphone arrays; Robot vision systems; Shape; Transfer functions |
title | Artificial Ear for Robots |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T05%3A02%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_6IE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Artificial%20Ear%20for%20Robots&rft.btitle=2006%205th%20IEEE%20Conference%20on%20Sensors&rft.au=Sungmok%20Hwang&rft.date=2006-10&rft.spage=1460&rft.epage=1463&rft.pages=1460-1463&rft.issn=1930-0395&rft.eissn=2168-9229&rft.isbn=1424403758&rft.isbn_list=9781424403752&rft_id=info:doi/10.1109/ICSENS.2007.355909&rft_dat=%3Cieee_6IE%3E4178904%3C/ieee_6IE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9781424403769&rft.eisbn_list=1424403766&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=4178904&rfr_iscdi=true |
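Illustrative note (not part of the catalog record): the abstract attributes the elevation-dependent spectral notches to reflections off the posterior wall of the concha. A minimal way to see why such notches move with geometry is a two-path model, in which the direct sound and a single delayed reflection combine into a comb filter. The sketch below is an assumption-laden illustration, not the authors' method; the extra path lengths are hypothetical values.

```python
def notch_frequencies(extra_path_m, speed_of_sound_m_s=343.0, n_notches=3):
    """First few notch frequencies of a direct-plus-reflected two-path model.

    A reflection that travels extra_path_m farther than the direct sound is
    delayed by tau = extra_path_m / c and cancels the direct sound whenever
    that delay equals an odd number of half periods, i.e. at
    f_n = (2n + 1) / (2 * tau).
    """
    tau = extra_path_m / speed_of_sound_m_s
    return [(2 * n + 1) / (2.0 * tau) for n in range(n_notches)]

# Hypothetical numbers: an extra path of ~2 cm puts the first notch near
# 8.6 kHz; a longer reflection path (e.g. as the source elevation changes)
# shifts the notches to lower frequencies.
print(notch_frequencies(0.02))  # ~[8575.0, 25725.0, 42875.0] Hz
print(notch_frequencies(0.03))  # ~[5716.7, 17150.0, 28583.3] Hz
```

Under this simplified model, a notch that moves with source elevation corresponds to a reflection path whose length depends on elevation, which is consistent with the concha-geometry relationship reported in the abstract.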