Articulatory response to delayed and real-time feedback based on regional tongue displacements

Speech is one of the most complex motor tasks, owing to its rapid timing and the precision it requires. Because articulatory movement is difficult to measure in real time, motion-based biofeedback for speech has been hard to investigate. Previously, we demonstrated an automatic measure of tongue movement accuracy from ultrasound imaging. Using this measure for articulatory biofeedback in a simplified, game-like display may benefit the learning of speech movement patterns. To better understand real-time articulatory biofeedback and improve the design of this display, this study presented articulatory biofeedback for the target word /ɑr/ (“are”) in a game with two conditions for feedback timing (delayed vs. concurrent, i.e., whether the game object started moving after or during speech production) and two difficulty levels (easy vs. hard target width, i.e., the articulatory precision required to reach the target). For each participant, two blocks of biofeedback for 20–50 productions each were presented in one collection session (randomizing whether the delayed or concurrent block came first), with the difficulty level randomized for each production within a block. Data from nine children with typical speech or residual speech sound disorder were analyzed, showing that response to and preference for feedback condition vary among individuals.
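The session design in the abstract (two feedback-timing blocks in randomized order, difficulty randomized per production) can be sketched as a trial schedule. This is an illustrative reconstruction only; the function name, parameter names, and the choice of 30 productions per block are assumptions, not taken from the study.

```python
import random

def make_session(n_productions=30, seed=None):
    """Sketch of the session design described in the abstract:
    two blocks (delayed vs. concurrent feedback) in randomized order,
    with difficulty (easy vs. hard target width) randomized per production.
    All names and defaults here are illustrative assumptions."""
    rng = random.Random(seed)
    blocks = ["delayed", "concurrent"]
    rng.shuffle(blocks)  # randomize which feedback-timing block comes first
    session = []
    for timing in blocks:
        # the study used 20-50 productions per block; 30 is an arbitrary example
        for _ in range(n_productions):
            session.append({
                "target": "/ɑr/",
                "feedback_timing": timing,
                # difficulty re-drawn independently for each production
                "difficulty": rng.choice(["easy", "hard"]),
            })
    return session

trials = make_session(seed=1)
```

Each dictionary in the returned list describes one production; a block is a contiguous run of trials sharing the same `feedback_timing`.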

Full description

Saved in:
Bibliographic details
Published in: The Journal of the Acoustical Society of America, 2022-10, Vol. 152 (4), p. A199-A199
Main authors: Dugan, Sarah; Li, Sarah R.; Eary, Kathryn; Spotts, AnnaKate; Schoenleb, Nicholas S.; Connolly, Ben; Seward, Renee; Riley, Michael A.; Mast, T. Douglas; Boyce, Suzanne
Format: Article
Language: English
Online access: Full text
DOI: 10.1121/10.0016021
ISSN: 0001-4966
EISSN: 1520-8524
Source: AIP Journals Complete; Alma/SFX Local Collection; AIP Acoustical Society of America