Prospective assessment of breast cancer risk from multimodal multiview ultrasound images via clinically applicable deep learning

Saved in:
Bibliographic Details
Published in: Nature biomedical engineering, 2021-06, Vol. 5 (6), p. 522-532
Main authors: Qian, Xuejun, Pei, Jing, Zheng, Hui, Xie, Xinxin, Yan, Lin, Zhang, Hao, Han, Chunguang, Gao, Xiang, Zhang, Hanqi, Zheng, Weiwei, Sun, Qiang, Lu, Lu, Shung, K. Kirk
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 532
container_issue 6
container_start_page 522
container_title Nature biomedical engineering
container_volume 5
creator Qian, Xuejun
Pei, Jing
Zheng, Hui
Xie, Xinxin
Yan, Lin
Zhang, Hao
Han, Chunguang
Gao, Xiang
Zhang, Hanqi
Zheng, Weiwei
Sun, Qiang
Lu, Lu
Shung, K. Kirk
description The clinical application of breast ultrasound for the assessment of cancer risk and of deep learning for the classification of breast-ultrasound images has been hindered by inter-grader variability and high false positive rates and by deep-learning models that do not follow Breast Imaging Reporting and Data System (BI-RADS) standards, lack explainability features and have not been tested prospectively. Here, we show that an explainable deep-learning system trained on 10,815 multimodal breast-ultrasound images of 721 biopsy-confirmed lesions from 634 patients across two hospitals and prospectively tested on 912 additional images of 152 lesions from 141 patients predicts BI-RADS scores for breast cancer as accurately as experienced radiologists, with areas under the receiver operating curve of 0.922 (95% confidence interval (CI) = 0.868–0.959) for bimodal images and 0.955 (95% CI = 0.909–0.982) for multimodal images. Multimodal multiview breast-ultrasound images augmented with heatmaps for malignancy risk predicted via deep learning may facilitate the adoption of ultrasound imaging in screening mammography workflows. An explainable deep-learning system prospectively predicts clinical scores for breast cancer risk from multimodal breast-ultrasound images as accurately as experienced radiologists.
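The description above reports discrimination as areas under the receiver operating curve with 95% confidence intervals (for example 0.922, 95% CI 0.868–0.959). A minimal sketch of how such a figure can be computed is shown below, assuming a non-parametric bootstrap over lesions and synthetic stand-in data; this is not the authors' evaluation code, and y_true, y_score and the sample size are hypothetical placeholders.

# Illustrative only: AUC with a bootstrap 95% confidence interval for a
# binary malignancy classifier, on synthetic stand-in data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-lesion labels (1 = biopsy-confirmed malignant) and
# model-predicted malignancy probabilities; 152 mirrors the test-set lesion count.
y_true = rng.integers(0, 2, size=152)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, size=152), 0.0, 1.0)

auc = roc_auc_score(y_true, y_score)

# Non-parametric bootstrap over lesions for the 95% confidence interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if y_true[idx].min() == y_true[idx].max():  # need both classes to compute AUC
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"AUC = {auc:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")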
doi_str_mv 10.1038/s41551-021-00711-2
format Article
publisher London: Nature Publishing Group UK
pmid 33875840
orcidid 0000-0003-3634-8757; 0000-0001-5507-8989
rights The Author(s), under exclusive licence to Springer Nature Limited 2021
fulltext fulltext
identifier ISSN: 2157-846X
ispartof Nature biomedical engineering, 2021-06, Vol.5 (6), p.522-532
issn 2157-846X
eissn 2157-846X
language eng
recordid cdi_proquest_miscellaneous_2515689434
source MEDLINE; SpringerLink Journals - AutoHoldings
subjects 631/114/1305
639/166/985
692/699/67/1347
692/700/1421
Adult
Biomedical and Life Sciences
Biomedical Engineering/Biotechnology
Biomedicine
Biopsy
Breast cancer
Breast Neoplasms - diagnostic imaging
Breast Neoplasms - pathology
Confidence intervals
Datasets as Topic
Deep Learning
False Positive Reactions
Female
Humans
Image classification
Image Interpretation, Computer-Assisted - statistics & numerical data
Lesions
Machine learning
Malignancy
Mammography
Mammography - methods
Mammography - standards
Medical imaging
Middle Aged
Observer Variation
Patients
Predictive Value of Tests
Prospective Studies
Risk Assessment
Ultrasonic imaging
Ultrasonography - methods
Ultrasonography - standards
Ultrasound
title Prospective assessment of breast cancer risk from multimodal multiview ultrasound images via clinically applicable deep learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-08T20%3A10%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Prospective%20assessment%20of%20breast%20cancer%20risk%20from%20multimodal%20multiview%20ultrasound%20images%20via%20clinically%20applicable%20deep%20learning&rft.jtitle=Nature%20biomedical%20engineering&rft.au=Qian,%20Xuejun&rft.date=2021-06-01&rft.volume=5&rft.issue=6&rft.spage=522&rft.epage=532&rft.pages=522-532&rft.issn=2157-846X&rft.eissn=2157-846X&rft_id=info:doi/10.1038/s41551-021-00711-2&rft_dat=%3Cproquest_cross%3E2542128118%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2542128118&rft_id=info:pmid/33875840&rfr_iscdi=true