Automated remote speech‐based testing of individuals with cognitive decline: Bayesian agreement of transcription accuracy

Bibliographic Details
Published in: Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, 2024-10, Vol. 16 (4), p. e70011
Authors: König, Alexandra, Köhler, Stefanie, Tröger, Johannes, Düzel, Emrah, Glanz, Wenzel, Butryn, Michaela, Mallick, Elisa, Priller, Josef, Altenstein, Slawek, Spottke, Annika, Kimmich, Okka, Falkenburger, Björn, Osterrath, Antje, Wiltfang, Jens, Bartels, Claudia, Kilimann, Ingo, Laske, Christoph, Munk, Matthias H., Roeske, Sandra, Frommann, Ingo, Hoffmann, Daniel C., Jessen, Frank, Wagner, Michael, Linz, Nicklas, Teipel, Stefan
Format: Article
Language: English
Online access: Full text
Description
Summary:
Introduction: We investigated the agreement between automated and gold‐standard manual transcriptions of telephone chatbot‐based semantic verbal fluency testing.
Methods: We examined 78 cases from the Screening over Speech in Unselected Populations for Clinical Trials in AD (PROSPECT‐AD) study, including cognitively normal individuals and individuals with subjective cognitive decline, mild cognitive impairment, and dementia. We used Bayesian Bland–Altman analysis of word count and of the qualitative features semantic cluster size, cluster switches, and word frequencies.
Results: We found high levels of agreement for word count, with a 93% probability of a newly observed difference being below the minimally important difference. The qualitative features had fair levels of agreement. Word count reached high levels of discrimination between cognitively impaired and unimpaired individuals, regardless of transcription mode.
Discussion: Our results support the use of automated speech recognition, particularly for the assessment of quantitative speech features, even when using data from telephone calls with cognitively impaired individuals in their homes.
Highlights:
High levels of agreement were found between automated and gold‐standard manual transcriptions of telephone chatbot‐based semantic verbal fluency testing, particularly for word count.
The qualitative features had fair levels of agreement.
Word count reached high levels of discrimination between cognitively impaired and unimpaired individuals, regardless of transcription mode.
Automated speech recognition for the assessment of quantitative and qualitative speech features seems feasible and reliable, even when using data from telephone calls with cognitively impaired individuals in their homes.
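The probability statement in the Results (a newly observed difference falling below the minimally important difference) can be illustrated with a minimal Bayesian Bland–Altman sketch. This is not the authors' analysis code: it assumes a simple normal model for the paired word-count differences with a noninformative (Jeffreys) prior, and the word counts and the minimally important difference (MID) below are hypothetical placeholders.

```python
# Minimal sketch of a Bayesian Bland-Altman agreement analysis between
# automated and manual word counts. Assumes a normal model for the paired
# differences with a Jeffreys prior; data and MID are illustrative only.
import numpy as np
from scipy import stats

manual = np.array([18, 22, 15, 20, 17, 25, 19, 21])     # hypothetical manual word counts
automated = np.array([17, 22, 14, 20, 16, 24, 19, 20])  # hypothetical automated (ASR) word counts
mid = 3.0                                                # hypothetical minimally important difference

d = automated - manual
n, dbar, s = len(d), d.mean(), d.std(ddof=1)

# Under the Jeffreys prior, the posterior predictive distribution of a newly
# observed difference is a shifted, scaled Student-t with n-1 degrees of freedom.
pred = stats.t(df=n - 1, loc=dbar, scale=s * np.sqrt(1 + 1 / n))

# Posterior probability that a new automated-vs-manual difference lies within the MID
p_within_mid = pred.cdf(mid) - pred.cdf(-mid)

# Bayesian analogue of the 95% limits of agreement
loa_low, loa_high = pred.ppf(0.025), pred.ppf(0.975)

print(f"mean difference: {dbar:.2f} words")
print(f"95% limits of agreement: [{loa_low:.2f}, {loa_high:.2f}]")
print(f"P(|new difference| < MID): {p_within_mid:.2f}")
```

With this kind of model, a reported "93% probability of a newly observed difference being below the minimally important difference" corresponds to the posterior predictive mass falling within the MID bounds; the study's actual model specification and MID definition may differ.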
ISSN: 2352-8729
DOI: 10.1002/dad2.70011