Speech intelligibility in a realistic virtual sound environment
In the present study, speech intelligibility was evaluated under realistic, controlled conditions. “Critical sound scenarios” were defined as acoustic scenes that hearing aid users, surveyed through ecological momentary assessment, considered important, difficult, and common. These sound scenarios were acquired in the real world using a spherical microphone array and reproduced inside a loudspeaker-based virtual sound environment (VSE) using Ambisonics. Speech reception thresholds (SRTs) were measured for normal-hearing (NH) and hearing-impaired (HI) listeners using sentences from the Danish hearing in noise test, spatially embedded in the acoustic background of an office-meeting sound scenario. In addition, speech recognition scores (SRSs) were obtained at a fixed signal-to-noise ratio (SNR) of −2.5 dB, corresponding to the median conversational SNR in the office meeting. SRTs measured in the realistic VSE-reproduced background were significantly higher for both NH and HI listeners than those obtained with artificial noise presented over headphones, presumably due to an increased amount of modulation masking and the larger cognitive effort required to separate the target speech from the intelligible interferers in the realistic background. SRSs obtained at the fixed SNR in the realistic background could be used to relate the listeners' speech intelligibility to the challenges they experience in the real world.
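To make the fixed-SNR condition concrete: presenting target sentences at an SNR of −2.5 dB means scaling the speech so that its level sits 2.5 dB below the level of the background. The sketch below is a minimal, hypothetical illustration and not the authors' test software; the broadband RMS-based level definition and the function names `mix_at_snr` and `rms` are assumptions made for the example.

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square level of a signal."""
    return float(np.sqrt(np.mean(np.square(x))))

def mix_at_snr(target: np.ndarray, background: np.ndarray, snr_db: float = -2.5) -> np.ndarray:
    """Scale `target` so the broadband target-to-background ratio equals `snr_db`, then mix.

    Illustrative only: the study's actual calibration (e.g., speech-weighted levels,
    multichannel loudspeaker playback) may differ.
    """
    # Broadband SNR of the unscaled signals, in dB
    current_snr_db = 20.0 * np.log10(rms(target) / rms(background))
    # Gain (in dB) to apply to the target to reach the desired SNR
    gain_db = snr_db - current_snr_db
    scaled_target = target * 10.0 ** (gain_db / 20.0)
    return scaled_target + background

# Placeholder signals standing in for a HINT sentence and the office-meeting
# background (same length and sample rate assumed).
fs = 44100
rng = np.random.default_rng(0)
target = rng.standard_normal(3 * fs)
background = rng.standard_normal(3 * fs)
mixture = mix_at_snr(target, background, snr_db=-2.5)
```

In the actual experiment the background was reproduced over a loudspeaker array via Ambisonics, so the corresponding gains would be applied to calibrated playback levels rather than to a single mixed waveform as done here.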
Saved in:
Published in: | The Journal of the Acoustical Society of America, 2021-04, Vol. 149 (4), p. 2791-2801 |
---|---|
Main authors: | Mansour, Naim; Marschall, Marton; May, Tobias; Westermann, Adam; Dau, Torsten |
Format: | Article |
Language: | eng |
Online access: | Full text |
container_end_page | 2801 |
---|---|
container_issue | 4 |
container_start_page | 2791 |
container_title | The Journal of the Acoustical Society of America |
container_volume | 149 |
creator | Mansour, Naim; Marschall, Marton; May, Tobias; Westermann, Adam; Dau, Torsten |
doi_str_mv | 10.1121/10.0004779 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0001-4966; EISSN: 1520-8524; DOI: 10.1121/10.0004779; PMID: 33940919; CODEN: JASMAN |
ispartof | The Journal of the Acoustical Society of America, 2021-04, Vol.149 (4), p.2791-2801 |
issn | 0001-4966; 1520-8524 |
language | eng |
source | AIP Journals Complete; Alma/SFX Local Collection; AIP Acoustical Society of America |
title | Speech intelligibility in a realistic virtual sound environment |