Validity of content‐based techniques for credibility assessment—How telling is an extended meta‐analysis taking research bias into account?

Summary: Content‐based techniques for credibility assessment (Criteria‐Based Content Analysis [CBCA], Reality Monitoring [RM]) have been shown to distinguish between experience‐based and fabricated statements in previous meta‐analyses. New simulations raised the question of whether these results are reliable, revealing that applying meta‐analytic methods to biased datasets can lead to false‐positive rates of up to 100%. By assessing the performance of different bias‐correcting meta‐analytic methods and applying them to a set of 71 studies, we aimed for more precise effect size estimates. According to the sole bias‐correcting meta‐analytic method that performed well under a priori specified boundary conditions, CBCA and RM distinguished between experience‐based and fabricated statements; however, substantial heterogeneity limited precise point estimation (effects were moderate to large). In contrast, Scientific Content Analysis (SCAN), another content‐based technique tested, failed to discriminate between truth and lies. It is discussed how the gap between research on and forensic application of content‐based credibility assessment may be narrowed.
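The record does not name the single bias‐correcting meta‐analytic estimator that performed well under the authors' boundary conditions. Purely as an illustration of what such a correction can look like, the sketch below implements a simplified PET‐PEESE adjustment in Python; the choice of estimator, the `pet_peese` helper, and the simulated effect sizes are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch of one common bias-correcting meta-analytic estimator
# (PET-PEESE). NOT necessarily the method the study relied on; the record
# does not name it. The effect sizes below are made-up illustration data,
# not the 71 studies analysed in the paper.
import numpy as np

def wls_intercept(x, y, w):
    """Weighted least-squares fit of y on x; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0], beta[1]

def pet_peese(d, se):
    """PET-PEESE: regress effect sizes on SE (PET) or SE^2 (PEESE),
    weighting by inverse variance; the intercept estimates the effect
    at SE -> 0, i.e. adjusted for small-study/publication bias."""
    w = 1.0 / se**2
    pet_b0, _ = wls_intercept(se, d, w)
    # Simplified decision rule: if the PET intercept suggests a nonzero
    # effect, switch to the (less biased in that case) PEESE estimate.
    if pet_b0 > 0:
        peese_b0, _ = wls_intercept(se**2, d, w)
        return peese_b0
    return pet_b0

# Hypothetical illustration data (Cohen's d and standard errors)
rng = np.random.default_rng(0)
se = rng.uniform(0.1, 0.5, size=71)
d = 0.8 + rng.normal(0.0, se)          # true effect 0.8 plus sampling error
print(f"bias-corrected estimate: {pet_peese(d, se):.2f}")
```

An uncorrected pooling of the same effects is the kind of estimate that, per the simulations cited in the abstract, can produce inflated false‐positive rates when the underlying set of studies is biased.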

Detailed Description

Bibliographic Details
Published in: Applied Cognitive Psychology, 2021-03, Vol. 35 (2), pp. 393-410
Authors: Oberlader, Verena A.; Quinten, Laura; Banse, Rainer; Volbert, Renate; Schmidt, Alexander F.; Schönbrodt, Felix D.
Format: Article
Language: English
Subjects: Bias; Boundary conditions; Content Analysis; Credibility; credibility assessment; criteria‐based content analysis; Evaluation; Forensic science; Meta Analysis; reality monitoring; scientific content analysis; Systematic review; Truth; Validity
Online access: Full text
DOI: 10.1002/acp.3776
ISSN: 0888-4080
EISSN: 1099-0720
Publisher: Wiley
Source: Applied Social Sciences Index &amp; Abstracts (ASSIA); Wiley Online Library Journals Frontfile Complete