Every Second Counts: A Comparison of Four Dot Counting Test Scoring Procedures for Detecting Invalid Neuropsychological Test Performance

Although performance validity tests (PVTs) are an integral element of neuropsychological assessment, most PVTs have historically been restricted to the memory domain. The Dot Counting Test (DCT) is a nonmemory PVT shown to reliably identify invalid performance. Although several traditional and abbreviated scoring methods have been derived, no study to date has directly compared the available scoring approaches within a single sample. This cross-sectional study cross-validated 4 different DCT scoring approaches, including the traditional rounded E-score proposed within the manual, an unrounded E-score, and 2 abbreviated scoring procedures based on 4- and 6-card versions (DCT-4 and DCT-6, respectively), in a diverse mixed clinical neuropsychiatric sample (N = 132). Validity groups were established by 5 independent criterion PVTs (102 valid and 30 invalid). Receiver operating characteristic curve analyses yielded significant areas under the curve (AUCs = .84−.86) for the overall sample, with sensitivities of 50%−67% at ≥89% specificity. The DCT scores had outstanding classification accuracy (AUCs ≥ .92; sensitivities = 80%−83%) in the unimpaired group and excellent classification accuracy in the impaired group (AUCs = .79−.81; sensitivities = 43%−60%). Whereas negligible differences emerged between the 4 scoring methods for the cognitively intact group, the DCT-4 showed notably stronger psychometric properties in the overall sample in general and the mild cognitive impairment group in particular. Results corroborate previous findings suggesting that the DCT is a robust PVT, regardless of the employed scoring procedure, and replicate support for the abbreviated DCT-4 as the recommended validity indicator.

Public Significance Statement: This study provided further cross-validation support for the clinical utility of the Dot Counting Test (DCT) as a brief, nonmemory performance validity test in a diverse neuropsychiatric population and replicated recent novel findings regarding abbreviated DCT scoring approaches that allow for more accurate assessment of validity status while minimizing evaluation time burden and associated costs.

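The classification statistics summarized above (area under the ROC curve, and sensitivity at a fixed specificity floor of .89) follow a standard receiver operating characteristic workflow. The sketch below is purely illustrative and is not the authors' analysis code: it assumes Python with NumPy and scikit-learn, and it simulates hypothetical score distributions using only the study's group sizes (102 valid, 30 invalid).

```python
# Illustrative ROC workflow for a candidate validity score (simulated data only).
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Hypothetical DCT-style scores; higher values suggest invalid performance.
valid_scores = rng.normal(loc=13, scale=3, size=102)    # criterion-valid group
invalid_scores = rng.normal(loc=20, scale=4, size=30)   # criterion-invalid group

scores = np.concatenate([valid_scores, invalid_scores])
labels = np.concatenate([np.zeros(102), np.ones(30)])   # 1 = invalid performance

# Overall discrimination.
auc = roc_auc_score(labels, scores)

# Sensitivity at the cutoff that keeps specificity at or above .89.
fpr, tpr, thresholds = roc_curve(labels, scores)
meets_spec = fpr <= (1 - 0.89)
sensitivity = tpr[meets_spec].max()
cutoff = thresholds[meets_spec][np.argmax(tpr[meets_spec])]

print(f"AUC = {auc:.2f}")
print(f"Sensitivity = {sensitivity:.2f} at cutoff >= {cutoff:.1f} (specificity >= .89)")
```

In applied use, a cutoff of this kind would typically be fixed in advance from a validation sample rather than re-optimized on each new dataset.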

Bibliographic Details
Published in: Psychological assessment, 2021-02, Vol. 33 (2), p. 133-141
Main authors: Rhoads, Tasha; Resch, Zachary J.; Ovsiew, Gabriel P.; White, Daniel J.; Abramson, Dayna A.; Soble, Jason R.
Format: Article
Language: English
Subjects: Cognitive ability; Credit scoring; Female; Human; Male; Medical diagnosis; Memory; Mild Cognitive Impairment; Neuropsychological Assessment; Neuropsychology; Psychometrics; Quantitative psychology; Scoring (Testing); Studies; Test Reliability; Test Validity
Online access: Full text
DOI: 10.1037/pas0000970
ISSN: 1040-3590
EISSN: 1939-134X
Source: APA PsycARTICLES