The Relevance of the Availability of Visual Speech Cues during Adaptation to Noise-Vocoded Speech

Purpose: This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared with viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, the study aimed to replicate lab-based results on the processing of distorted speech in an online setup.

Method: We monitored recognition accuracy online while participants listened to noise-vocoded sentences. We first established whether participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: one in which participants viewed the moving lower part of the speaker's face (AV Mouth), one in which they viewed only the moving upper part of the face (AV Eyes), one in which they could see neither the moving lower nor the moving upper face (AV Blocked), and one in which they saw an image of a still face (AV Still).

Results: Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions in which the moving mouth was occluded.

Conclusions: The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. They also demonstrate that it is feasible to run speech perception and adaptation studies online, although not all findings reported for lab studies replicate.
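The Method section refers to four-band noise-vocoded sentences. As context for that manipulation, the sketch below illustrates the standard envelope-vocoding procedure (band-pass analysis, amplitude-envelope extraction, and modulation of band-limited noise). It is a rough illustration under assumed parameter values (band edges, filter order, envelope cutoff); it is not the stimulus-generation code used in the study.

```python
# Minimal sketch of a four-band noise vocoder, assuming numpy and scipy are available.
# All parameter values below are illustrative assumptions, not the study's settings.
import numpy as np
from scipy.signal import butter, sosfiltfilt


def noise_vocode(speech, fs, band_edges_hz=(100, 500, 1500, 3000, 6000),
                 env_cutoff_hz=30.0, order=4):
    """Replace the spectral fine structure of `speech` with noise, band by band."""
    nyq = fs / 2.0
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(speech))              # broadband noise carrier
    env_lp = butter(order, env_cutoff_hz / nyq, btype="lowpass", output="sos")
    vocoded = np.zeros(len(speech))
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        band = butter(order, [lo / nyq, hi / nyq], btype="bandpass", output="sos")
        speech_band = sosfiltfilt(band, speech)              # band-limit the speech
        envelope = sosfiltfilt(env_lp, np.abs(speech_band))  # rectify + smooth -> envelope
        noise_band = sosfiltfilt(band, carrier)              # band-limit the carrier
        vocoded += envelope * noise_band                     # envelope modulates the noise
    return vocoded / (np.max(np.abs(vocoded)) + 1e-12)       # simple peak normalisation


# Example: vocode one second of a 220 Hz test tone sampled at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
out = noise_vocode(np.sin(2 * np.pi * 220.0 * t), fs)
```

Vocoding of this kind preserves the slow temporal envelope in each band while discarding spectral detail, which is why the sentences are initially hard to understand yet become more intelligible with exposure.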

Bibliographic Details
Published in: Journal of Speech, Language, and Hearing Research, 2021-07, Vol. 64 (7), p. 2513-2528
Authors: Trotter, Antony S.; Banks, Briony; Adank, Patti
Format: Article
Language: English
Online access: Full text
DOI: 10.1044/2021_JSLHR-20-00575
ISSN: 1092-4388
EISSN: 1558-9102
Source: EBSCOhost Education Source; Alma/SFX Local Collection
Subjects:
Accuracy
Acoustics
Adaptation
Adjustment (to Environment)
Attention
Auditory Perception
Cues
Foreign Countries
Linguistic research
Listening
Listening Comprehension
Mouth
Native Speakers
Noise
Nonverbal communication
Oral communication
Recognition (Psychology)
Sentences
Speech
Speech Improvement
Speech perception
Vision
Visual Acuity
Visual perception
Visual Stimuli