Deep learning-based classification of posttraumatic stress disorder and depression following trauma utilizing visual and auditory markers of arousal and mood
Visual and auditory signs of patient functioning have long been used for clinical diagnosis, treatment selection, and prognosis. Direct measurement and quantification of these signals aims to improve the consistency, sensitivity, and scalability of clinical assessment. Currently, we investigate whether machine learning-based computer vision (CV), semantic, and acoustic analysis can capture clinical features from free speech responses to a brief interview 1 month post-trauma that accurately classify major depressive disorder (MDD) and posttraumatic stress disorder (PTSD).
Saved in:
Published in: | Psychological medicine 2022-04, Vol.52 (5), p.957-967 |
---|---|
Main Authors: | Schultebraucks, Katharina; Yadav, Vijay; Shalev, Arieh Y.; Bonanno, George A.; Galatzer-Levy, Isaac R. |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Full text |
container_end_page | 967 |
---|---|
container_issue | 5 |
container_start_page | 957 |
container_title | Psychological medicine |
container_volume | 52 |
creator | Schultebraucks, Katharina; Yadav, Vijay; Shalev, Arieh Y.; Bonanno, George A.; Galatzer-Levy, Isaac R. |
description | Visual and auditory signs of patient functioning have long been used for clinical diagnosis, treatment selection, and prognosis. Direct measurement and quantification of these signals aims to improve the consistency, sensitivity, and scalability of clinical assessment. Currently, we investigate whether machine learning-based computer vision (CV), semantic, and acoustic analysis can capture clinical features from free speech responses to a brief interview 1 month post-trauma that accurately classify major depressive disorder (MDD) and posttraumatic stress disorder (PTSD).
N = 81 patients admitted to an emergency department (ED) of a Level-1 Trauma Unit following a life-threatening traumatic event participated in an open-ended qualitative interview with a para-professional about their experience 1 month following admission. A deep neural network was utilized to extract facial features of emotion and their intensity, movement parameters, speech prosody, and natural language content. These features were utilized as inputs to classify PTSD and MDD cross-sectionally.
Both video- and audio-based markers contributed to good discriminatory classification accuracy. The algorithm discriminates PTSD status at 1 month after ED admission with an AUC of 0.90 (weighted average precision = 0.83, recall = 0.84, and f1-score = 0.83) as well as depression status at 1 month after ED admission with an AUC of 0.86 (weighted average precision = 0.83, recall = 0.82, and f1-score = 0.82).
Direct clinical observation during post-trauma free speech using deep learning identifies digital markers that can be utilized to classify MDD and PTSD status. |
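The methods paragraph above describes extracting facial-emotion, movement, prosody, and natural-language features from the interview recordings and using them as classifier inputs. The study's own code is not part of this record; the following is a minimal illustrative sketch, in Python with scikit-learn, of how pre-extracted multimodal features could be fused and cross-validated for PTSD classification. All array names, dimensionalities, and labels are hypothetical placeholders, not the authors' pipeline.

```python
# Minimal illustrative sketch (not the authors' code): feature-level fusion of
# pre-extracted visual, acoustic, and language features to classify PTSD status.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 81                                    # sample size reported in the abstract
facial   = rng.normal(size=(n, 20))       # hypothetical facial-emotion intensities
movement = rng.normal(size=(n, 6))        # hypothetical movement parameters
prosody  = rng.normal(size=(n, 12))       # hypothetical pitch/energy statistics
language = rng.normal(size=(n, 50))       # hypothetical text-embedding features
y = rng.integers(0, 2, size=n)            # placeholder PTSD labels (0/1)

X = np.hstack([facial, movement, prosody, language])  # early (feature-level) fusion

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, proba), 2))
```

Early fusion as sketched here is only one design choice; training per-modality models and combining their outputs (late fusion) is an equally plausible reading of the abstract.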
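The results above report AUC alongside weighted-average precision, recall, and F1, i.e. per-class metrics averaged with weights proportional to each class's support. A short sketch of how these quantities are typically computed, using hypothetical labels and scores rather than the study's data:

```python
# Sketch of the reported evaluation metrics on hypothetical data (not the study's):
# AUC from predicted probabilities; support-weighted precision/recall/F1 from
# thresholded class predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])                  # observed status
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.9, 0.2, 0.6])  # model probabilities
y_pred  = (y_score >= 0.5).astype(int)                        # thresholded labels

auc = roc_auc_score(y_true, y_score)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted")
print(f"AUC={auc:.2f}  precision={prec:.2f}  recall={rec:.2f}  f1={f1:.2f}")
```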
doi_str_mv | 10.1017/S0033291720002718 |
format | Article |
pmid | 32744201 |
publisher | Cambridge University Press, Cambridge, UK |
fulltext | fulltext |
identifier | ISSN: 0033-2917 |
ispartof | Psychological medicine, 2022-04, Vol.52 (5), p.957-967 |
issn | 0033-2917 1469-8978 |
language | eng |
recordid | cdi_proquest_miscellaneous_2430094953 |
source | MEDLINE; Applied Social Sciences Index & Abstracts (ASSIA); Cambridge University Press Journals Complete |
subjects | Acoustics; Arousal; Biomarkers; Classification; Clinical assessment; Clinical skills; Computer vision; Deep Learning; Depression; Depressive Disorder, Major - diagnosis; Depressive Disorder, Major - psychology; Depressive personality disorders; Emergency medical care; Emergency services; Emotions; Freedom of speech; Humans; Interviews; Life threatening; Measurement; Medical diagnosis; Medical prognosis; Mental depression; Mental disorders; Natural language; Neural networks; Observation; Original Article; Patients; Physical characteristics; Post traumatic stress disorder; Prosody; Psychopathology; Sensory integration; Speech; Stress Disorders, Post-Traumatic - diagnosis; Stress Disorders, Post-Traumatic - psychology; Trauma; Traumatic life events |
title | Deep learning-based classification of posttraumatic stress disorder and depression following trauma utilizing visual and auditory markers of arousal and mood |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T14%3A35%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Deep%20learning-based%20classification%20of%20posttraumatic%20stress%20disorder%20and%20depression%20following%20trauma%20utilizing%20visual%20and%20auditory%20markers%20of%20arousal%20and%20mood&rft.jtitle=Psychological%20medicine&rft.au=Schultebraucks,%20Katharina&rft.date=2022-04-01&rft.volume=52&rft.issue=5&rft.spage=957&rft.epage=967&rft.pages=957-967&rft.issn=0033-2917&rft.eissn=1469-8978&rft_id=info:doi/10.1017/S0033291720002718&rft_dat=%3Cproquest_cross%3E2647726108%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2647726108&rft_id=info:pmid/32744201&rft_cupid=10_1017_S0033291720002718&rfr_iscdi=true |