Crowdsourcing as a tool in the clinical assessment of intelligibility in dysarthria: How to deal with excessive variation


Bibliographic Details

Published in: Journal of communication disorders, 2021-09, Vol. 93, p. 106135, Article 106135
Main authors: Ziegler, Wolfram; Lehner, Katharina; Klonowski, Madleen; Geißler, Nadine; Ammer, Franziska; Kurfeß, Christina; Grötzbach, Holger; Mandl, Alexander; Knorr, Felicitas; Strecker, Katrin; Schölderle, Theresa; Matern, Sina; Weck, Christiane; Gröne, Berthold; Brühl, Stefanie; Kirchner, Christiane; Kleiter, Ingo; Sühn, Ursula; von Eichmann, Joachim; Möhrle, Christina; Spencer, Pete Guy; Ilg, Rüdiger; Klintwort, Doris; Lubecki, Daniel; Marinho, Steffy; Hogrefe, Katharina (KommPaS Study Group)
Format: Article
Language: English
Subjects: Crowdsourcing; Dysarthria; Intelligibility; Quality control; Validation
Online access: Full text
Highlights

• Involvement of laypersons in clinical intelligibility assessment is needed.
• Crowdsourcing is a way to involve laypersons in clinical dysarthria assessment.
• Excessive variability of crowd scores is constrained by weighted aggregation.
• Cost-benefit considerations suggest panels of 9 listeners.
• The proposed method immunizes crowd-based intelligibility scores against spamming.

Abstract

Independent laypersons are essential in the assessment of intelligibility in persons with dysarthria (PWD), as they reflect intelligibility limitations in the most ecologically valid way, without being influenced by familiarity with the speaker. The present work investigated online crowdsourcing as a convenient method to involve laypersons as listeners, with the objective of exploring how to constrain the expected variability of crowd-based judgements so that they become applicable in clinical diagnostics. Intelligibility was assessed using a word transcription task administered via crowdsourcing. In study 1, speech samples of 23 PWD were transcribed by 18 crowdworkers each. Four methods of aggregating the intelligibility scores of randomly sampled panels of 4 to 14 listeners were compared for accuracy, i.e. the stability of the resulting intelligibility estimates across different panels, and for validity, i.e. the degree to which they matched data obtained under controlled laboratory conditions (“gold standard”). In addition, we determined an economically acceptable number of crowdworkers per speaker needed to obtain accurate and valid intelligibility estimates. Study 2 examined the robustness of the chosen aggregation method against downward outliers due to spamming in a larger sample of 100 PWD. In study 1, an interworker aggregation method based on negative exponential weightings of the scores as a function of their distance from the “best” listener's score (exponentially weighted mean) outperformed three other methods (median value, arithmetic mean, maximum). Under cost-benefit considerations, an optimum panel size of 9 crowd listeners per examination was determined. Study 2 demonstrated the robustness of this aggregation method against spamming crowd listeners. Though intelligibility data collected through online crowdsourcing are noisy, accurate and valid intelligibility estimates can be obtained by appropriate aggregation of the raw data. This makes crowdsourcing a suitable method for incorporating real-world perspectives into clinical dysarthria assessment.
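The exponentially weighted mean at the core of study 1 lends itself to a compact illustration. The Python sketch below is not the authors' implementation: the exact weighting formula, the decay parameter `lam`, the choice of the highest panel score as the "best" listener reference, and all toy scores are assumptions made for illustration. Only the general idea, namely weights that decay exponentially with a score's distance from the best listener's score, follows the abstract.

```python
import numpy as np

def exponentially_weighted_mean(scores, lam=5.0):
    """Aggregate per-listener intelligibility scores (assumed 0..1).

    Each score is weighted by exp(-lam * d), where d is its distance
    from the "best" (here: highest) listener score. Listeners close to
    the best transcriber dominate the estimate, while downward outliers
    such as spamming crowdworkers receive near-zero weight. The decay
    parameter lam is an assumed value, not one taken from the paper.
    """
    s = np.asarray(scores, dtype=float)
    w = np.exp(-lam * (s.max() - s))  # best listener gets weight 1.0
    return float(np.sum(w * s) / np.sum(w))

def panel_spread(worker_scores, panel_size, n_panels=1000, seed=0):
    """Stability proxy: spread of the aggregate across randomly drawn
    listener panels, loosely mirroring the "accuracy" criterion of
    study 1 (smaller spread means more stable estimates)."""
    rng = np.random.default_rng(seed)
    estimates = [
        exponentially_weighted_mean(
            rng.choice(worker_scores, size=panel_size, replace=False))
        for _ in range(n_panels)
    ]
    return float(np.std(estimates))

# Toy panel of 9 crowd listeners: eight cluster around 0.75, one
# spammer submitted near-random transcriptions (score 0.05).
panel = [0.78, 0.74, 0.71, 0.80, 0.76, 0.73, 0.69, 0.77, 0.05]
print(f"arithmetic mean: {np.mean(panel):.3f}")   # dragged down by spammer
print(f"median:          {np.median(panel):.3f}")
print(f"maximum:         {np.max(panel):.3f}")
print(f"exp. weighted:   {exponentially_weighted_mean(panel):.3f}")

# 18 simulated crowdworkers per speaker, as in study 1: stability
# improves with panel size, which informs the cost-benefit trade-off
# behind the recommended panel of 9 listeners.
workers = np.random.default_rng(1).normal(0.70, 0.08, size=18).clip(0, 1)
for k in (4, 9, 14):
    print(f"panel size {k:2d}: spread = {panel_spread(workers, k):.3f}")
```

On this toy data the single spammer pulls the arithmetic mean down to about 0.67, while the exponentially weighted mean stays near the honest cluster at roughly 0.75, which illustrates the robustness property reported in study 2.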
DOI: 10.1016/j.jcomdis.2021.106135
ISSN: 0021-9924
eISSN: 1873-7994
Source: Elsevier ScienceDirect Journals
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-05T14%3A22%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Crowdsourcing%20as%20a%20tool%20in%20the%20clinical%20assessment%20of%20intelligibility%20in%20dysarthria:%20How%20to%20deal%20with%20excessive%20variation&rft.jtitle=Journal%20of%20communication%20disorders&rft.au=Ziegler,%20Wolfram&rft.aucorp=KommPaS%20Study%20Group&rft.date=2021-09-01&rft.volume=93&rft.spage=106135&rft.epage=106135&rft.pages=106135-106135&rft.artnum=106135&rft.issn=0021-9924&rft.eissn=1873-7994&rft_id=info:doi/10.1016/j.jcomdis.2021.106135&rft_dat=%3Cproquest_cross%3E2548399754%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2548399754&rft_id=info:pmid/&rft_els_id=S0021992421000587&rfr_iscdi=true