Using an Automated Speech Recognition Approach to Differentiate Between Normal and Aspirating Swallowing Sounds Recorded from Digital Cervical Auscultation in Children

Use of machine learning to accurately detect aspirating swallowing sounds in children is an evolving field. Previously reported classifiers for the detection of aspirating swallowing sounds in children have reported sensitivities between 79 and 89%. This study aimed to investigate the accuracy of using an automatic speaker recognition approach to differentiate between normal and aspirating swallowing sounds recorded from digital cervical auscultation in children. We analysed 106 normal swallows from 23 healthy children (median 13 months; 52.1% male) and 18 aspirating swallows from 18 children (median 10.5 months; 61.1% male) who underwent concurrent videofluoroscopic swallow studies with digital cervical auscultation. All swallowing sounds were on thin fluids. A support vector machine classifier with a polynomial kernel was trained on feature vectors that comprised the mean and standard deviation of spectral subband centroids extracted from each swallowing sound in the training set. The trained support vector machine was then used to classify swallowing sounds in the test set. We found high accuracy in the differentiation of aspirating and normal swallowing sounds, with 98% overall accuracy. Sensitivities for the detection of aspirating and normal swallowing sounds were 89% and 100%, respectively. There were consistent differences in time, power spectral density and spectral subband centroid features between aspirating and normal swallowing sounds in children. This study provides preliminary research evidence that aspirating and normal swallowing sounds in children can be differentiated accurately using machine learning techniques.
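The pipeline described in the abstract (per-swallow feature vectors built from the mean and standard deviation of spectral subband centroids, fed to a polynomial-kernel support vector machine) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the number of subbands, the STFT settings, the kernel degree, and the synthetic placeholder data are all assumptions.

    # Rough sketch of the feature-extraction + SVM approach described in the
    # abstract. Subband count, STFT settings and kernel degree are assumed,
    # and random noise stands in for real swallow recordings.
    import numpy as np
    from scipy.signal import stft
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def spectral_subband_centroids(audio, fs, n_subbands=8, nperseg=512):
        """Centroid frequency of each spectral subband, per STFT frame."""
        freqs, _, Z = stft(audio, fs=fs, nperseg=nperseg)
        power = np.abs(Z) ** 2                      # (n_freqs, n_frames)
        edges = np.linspace(0, len(freqs), n_subbands + 1, dtype=int)
        bands = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            band_power = power[lo:hi]
            band_freqs = freqs[lo:hi, None]
            bands.append((band_freqs * band_power).sum(0)
                         / (band_power.sum(0) + 1e-12))
        return np.stack(bands)                      # (n_subbands, n_frames)

    def feature_vector(audio, fs):
        """Mean and standard deviation of each subband centroid per swallow."""
        ssc = spectral_subband_centroids(audio, fs)
        return np.concatenate([ssc.mean(axis=1), ssc.std(axis=1)])

    # Placeholder data: 1-second noise clips instead of labelled swallows.
    fs = 16000
    rng = np.random.default_rng(0)
    X = np.array([feature_vector(rng.standard_normal(fs), fs) for _ in range(40)])
    y = np.array([0] * 20 + [1] * 20)               # 0 = normal, 1 = aspirating
    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
    clf.fit(X, y)
    print(clf.score(X, y))                          # training accuracy only

A real evaluation would additionally need held-out test swallows, subject-wise splitting, and handling of the heavy class imbalance (18 aspirating versus 106 normal swallows); the sketch only makes the feature/classifier pairing concrete.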

Bibliographic details
Published in: Dysphagia, 2022-12, Vol. 37 (6), p. 1482-1492
Authors: Frakking, Thuy T.; Chang, Anne B.; Carty, Christopher; Newing, Jade; Weir, Kelly A.; Schwerin, Belinda; So, Stephen
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s00455-022-10410-y
ISSN: 0179-051X
EISSN: 1432-0460
Source: MEDLINE; SpringerLink Journals
Subjects:
Accuracy
Auscultation
Auscultation - methods
Child
Children
Deglutition
Deglutition Disorders - diagnosis
Dysphagia
Female
Gastroenterology
Hepatology
Humans
Imaging
Learning algorithms
Machine learning
Male
Medical colleges
Medical imaging equipment
Medicine
Medicine & Public Health
Original
Original Article
Otorhinolaryngology
Radiology
Sound
Speech Perception
Speech recognition
Support vector machines
Swallowing
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-29T01%3A55%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Using%20an%20Automated%20Speech%20Recognition%20Approach%20to%20Differentiate%20Between%20Normal%20and%20Aspirating%20Swallowing%20Sounds%20Recorded%20from%20Digital%20Cervical%20Auscultation%20in%20Children&rft.jtitle=Dysphagia&rft.au=Frakking,%20Thuy%20T.&rft.date=2022-12-01&rft.volume=37&rft.issue=6&rft.spage=1482&rft.epage=1492&rft.pages=1482-1492&rft.issn=0179-051X&rft.eissn=1432-0460&rft_id=info:doi/10.1007/s00455-022-10410-y&rft_dat=%3Cgale_pubme%3EA725848173%3C/gale_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2733851920&rft_id=info:pmid/35092488&rft_galeid=A725848173&rfr_iscdi=true