Self-localizing dynamic microphone arrays

This paper introduces a mechanism for localizing a microphone array when the location of sound sources in the environment is known. Using the proposed spatial observability function based microphone array integration technique, a maximum likelihood estimator for the correct position and orientation of the array is derived. This is used to localize and track a microphone array with a known and fixed geometrical structure, which can be viewed as the inverse sound localization problem. Simulations using a two-element dynamic microphone array illustrate the ability of the proposed technique to correctly localize and estimate the orientation of the array even in a very reverberant environment. Using 1 s male speech segments from three speakers in a 7 m by 6 m by 2.5 m simulated environment, a 30 cm inter-microphone distance, and PHAT histogram SLF generation, the average localization error was approximately 3 cm with an average orientation error of 19°. The same simulation configuration but with 4 s speech segments results in an average localization error of less than 1 cm, with an average orientation error of approximately 2°. Experimental examples illustrate localizations for both stationary and dynamic microphone pairs.
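
To make the inverse sound localization idea concrete, the sketch below brute-forces a maximum-likelihood pose for a two-element array from the time differences of arrival (TDOAs) that known source positions would produce. This is a minimal, hypothetical illustration, not the paper's method: it assumes idealized TDOA measurements with Gaussian error rather than PHAT-histogram spatial likelihood functions, and the function names, grid resolutions, and source positions are invented for the example.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s


def mic_positions(center, angle, spacing=0.3):
    """Endpoints of a two-microphone array given its center and orientation."""
    offset = 0.5 * spacing * np.array([np.cos(angle), np.sin(angle)])
    return center - offset, center + offset


def predicted_tdoa(source, center, angle, spacing=0.3):
    """TDOA (seconds) of a source at the two microphones for a candidate pose."""
    m1, m2 = mic_positions(center, angle, spacing)
    return (np.linalg.norm(source - m1) - np.linalg.norm(source - m2)) / C


def estimate_pose(sources, tdoas, room=(7.0, 6.0), grid=35, n_angles=72):
    """Grid search for the array pose that best explains the measured TDOAs.

    Under a Gaussian error model, minimizing the summed squared TDOA mismatch
    over all known sources is equivalent to maximizing the likelihood of the
    candidate position and orientation.
    """
    best_pose, best_err = None, np.inf
    for x in np.linspace(0.0, room[0], grid):
        for y in np.linspace(0.0, room[1], grid):
            center = np.array([x, y])
            for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
                err = sum(
                    (predicted_tdoa(s, center, theta) - t) ** 2
                    for s, t in zip(sources, tdoas)
                )
                if err < best_err:
                    best_pose, best_err = (x, y, theta), err
    return best_pose


# Three known talker positions (m) and TDOAs synthesized from a "true" pose;
# the search should recover that pose to within the grid resolution.
sources = [np.array([1.5, 2.0]), np.array([5.0, 1.0]), np.array([3.5, 5.0])]
true_center, true_angle = np.array([2.0, 3.0]), np.deg2rad(40.0)
tdoas = [predicted_tdoa(s, true_center, true_angle) for s in sources]
x_hat, y_hat, theta_hat = estimate_pose(sources, tdoas)
print(f"estimated pose: x={x_hat:.2f} m, y={y_hat:.2f} m, theta={np.degrees(theta_hat):.1f} deg")
```

In the paper itself, the evidence at each candidate pose presumably comes from PHAT-histogram spatial likelihood functions combined through spatial observability functions rather than from raw TDOA residuals; the sketch only mirrors the outer maximum-likelihood search over position and orientation.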

Bibliographic details
Published in: IEEE transactions on human-machine systems, 2002-11, Vol. 32 (4), p. 474-484
Author: Aarabi, P.
Format: Article
Language: English
DOI: 10.1109/TSMCB.2002.804369
ISSN: 1094-6977, 2168-2291
EISSN: 1558-2442, 2168-2305
Source: IEEE Electronic Library (IEL)
Subjects:
Acoustic sensors
Applied sciences
Arrays
Computer science; control theory; systems
Control theory. Systems
Costs
Dynamics
Errors
Exact sciences and technology
Histograms
Localization
Loudspeakers
Maximum likelihood estimation
Microphone arrays
Microphones
Miscellaneous
Observability
Orientation
Position (location)
Sensor arrays
Sensor fusion
Simulation
Speech