Robust and real-time decoding of selective auditory attention from M/EEG: A state-space modeling approach

Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from M/EEG recordings. Most existing approaches operate offline and require the entire data duration and multiple trials to provide robust results; they therefore cannot be used in emerging applications such as smart hearing aids, where a single trial must be decoded in real time. In this work, we close this gap by integrating various techniques from the state-space modeling paradigm, such as adaptive filtering, sparse estimation, and Expectation-Maximization, into a framework for robust and real-time decoding of the attentional state from M/EEG recordings. We validate the performance of this framework using comprehensive simulations as well as application to experimentally acquired M/EEG data. Our results reveal that the proposed real-time algorithms perform nearly as accurately as existing state-of-the-art offline techniques, while providing a significant degree of adaptivity, statistical robustness, and computational savings.
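To make the approach concrete, below is a minimal sketch — not the authors' published algorithm — of one ingredient the abstract describes: smoothing noisy per-window "attention markers" (e.g., correlations between a reconstructed stimulus envelope and each speaker's envelope) with a two-state latent model whose emission parameters are estimated by Expectation-Maximization. All variable names, parameter values, and the simulated data are illustrative assumptions.

```python
# A minimal sketch, NOT the authors' published algorithm: smooth noisy
# per-window attention markers with a sticky two-state model fitted by EM.
# All names, parameter values, and simulated data are illustrative.
import numpy as np

def forward_backward(y, means, var, p_stay):
    """Smoothed posterior P(attended speaker at window k | all markers y)."""
    K = len(y)
    trans = np.array([[p_stay, 1.0 - p_stay],
                      [1.0 - p_stay, p_stay]])
    # Gaussian emission likelihoods for the two attentional states; the
    # shared normalizing constant cancels after per-step normalization.
    lik = np.exp(-0.5 * (y[:, None] - means[None, :]) ** 2 / var)
    alpha = np.zeros((K, 2))
    beta = np.ones((K, 2))
    alpha[0] = 0.5 * lik[0]
    alpha[0] /= alpha[0].sum()
    for k in range(1, K):                        # forward pass
        alpha[k] = lik[k] * (alpha[k - 1] @ trans)
        alpha[k] /= alpha[k].sum()
    for k in range(K - 2, -1, -1):               # backward pass
        beta[k] = trans @ (lik[k + 1] * beta[k + 1])
        beta[k] /= beta[k].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

def em_fit(y, n_iter=20):
    """Alternate smoothing (E-step) with emission updates (M-step).
    The sticky transition probability is held fixed for simplicity."""
    means, var, p_stay = np.array([0.1, 0.4]), 0.05, 0.95
    for _ in range(n_iter):
        post = forward_backward(y, means, var, p_stay)        # E-step
        means = (post * y[:, None]).sum(0) / post.sum(0)      # M-step: means
        var = (post * (y[:, None] - means) ** 2).sum() / len(y)  # shared var
    return post, means

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated listener attending speaker B, then A, then B again.
    truth = np.repeat([1, 0, 1], 40)
    y = rng.normal(np.where(truth == 1, 0.35, 0.10), 0.15)
    post, _ = em_fit(y)
    acc = float(((post[:, 1] > 0.5) == (truth == 1)).mean())
    print(f"decoded-state accuracy: {acc:.2f}")
```

In a genuinely real-time setting, the full backward pass above would be replaced by a forward-only or fixed-lag smoother so that state estimates become available as each window of data arrives; the adaptive-filtering and sparse-estimation components mentioned in the abstract would supply the markers y in the first place.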

Bibliographic Details
Published in: The Journal of the Acoustical Society of America, 2018-03, Vol. 143 (3), p. 1743
Main Authors: Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z.; Zhang, Tao; Babadi, Behtash
Format: Article
Language: English
Online Access: Full text
DOI: 10.1121/1.5035690
ISSN: 0001-4966
EISSN: 1520-8524
CODEN: JASMAN
Source: AIP Journals Complete; Alma/SFX Local Collection; AIP Acoustical Society of America