A comparison of methods for fully automatic segmentation of tumors and involved nodes in PET/CT of head and neck cancers


Bibliographic Details
Published in: Physics in medicine & biology, 2021-03, Vol. 66 (6), p. 065012
Main authors: Groendahl, Aurora Rosvoll; Skjei Knudtsen, Ingerid; Huynh, Bao Ngoc; Mulstad, Martine; Moe, Yngve Mardal; Knuth, Franziska; Tomic, Oliver; Indahl, Ulf Geir; Torheim, Turid; Dale, Einar; Malinen, Eirik; Futsaether, Cecilia Marie
Format: Article
Language: English (eng)
Online access: Full text
Description: Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing risks of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches, the impact of single versus multimodality input on segmentation quality was also assessed. 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single-modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value and surface distance-based metrics (p ≤ 0.0001). Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.
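The Sørensen-Dice coefficient used to rank the segmentation models, and the fixed-cutoff PET thresholding it is compared against, can be sketched on a synthetic image. This is a toy illustration, not the paper's implementation: the SUV cutoff of 2.5 and the synthetic "PET slice" are invented for demonstration.

```python
import numpy as np

def threshold_segment(suv, cutoff=2.5):
    """Binary segmentation of a PET image by a fixed SUV cutoff
    (illustrative; the study also evaluates adaptive thresholds
    and learned models)."""
    return (suv >= cutoff).astype(np.uint8)

def dice(pred, truth):
    """Sørensen-Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Synthetic 2D "PET slice": background uptake ~1, one hot lesion ~5.
suv = np.ones((64, 64))
suv[20:30, 20:30] = 5.0

# Ground-truth mask covering the lesion (stands in for the manual GTV).
truth = np.zeros((64, 64), dtype=np.uint8)
truth[20:30, 20:30] = 1

pred = threshold_segment(suv, cutoff=2.5)
print(round(dice(pred, truth), 3))  # perfect overlap on this toy image: 1.0
```

On real PET/CT data the overlap is far from perfect, which is what the reported mean Dice scores (0.62 for thresholding vs. 0.74 for the PET/CT CNN) quantify.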
DOI: 10.1088/1361-6560/abe553
ISSN: 0031-9155
EISSN: 1361-6560
PMID: 33666176
Source: MEDLINE; IOP Publishing Journals; Institute of Physics (IOP) Journals - HEAL-Link
Subjects: automatic segmentation; deep learning; gross tumor volume; head and neck cancer; Head and Neck Neoplasms - diagnostic imaging; Humans; Image Processing, Computer-Assisted - methods; machine learning; Neural Networks, Computer; PET/CT; Positron Emission Tomography Computed Tomography - methods; thresholding