Anatomical Partition‐Based Deep Learning: An Automatic Nasopharyngeal MRI Recognition Scheme
Published in: | Journal of magnetic resonance imaging 2022-10, Vol.56 (4), p.1220-1229 |
---|---|
Format: | Article |
Language: | English |
Online access: | Full text |
Authors: Li, Song; Hua, Hong‐Li; Li, Fen; Kong, Yong‐Gang; Zhu, Zhi‐Ling; Li, Sheng‐Lan; Chen, Xi‐Xiang; Deng, Yu‐Qin; Tao, Ze‐Zhang
Background
Training deep learning (DL) models to automatically recognize diseases in nasopharyngeal MRI is a challenging task, and optimizing the performance of DL models is difficult.
Purpose
To develop a method of training an anatomical partition‐based DL model that integrates knowledge of clinical anatomical regions in otorhinolaryngology to automatically recognize diseases in nasopharyngeal MRI.
Study Type
Single‐center retrospective study.
Population
A total of 2485 patients with nasopharyngeal diseases (age range 14–82 years; female, 779 [31.3%]) and 600 people with normal nasopharynx (age range 18–78 years; female, 281 [46.8%]) were included.
Sequence
3.0 T; T2WI fast spin‐echo sequence.
Assessment
Full images (512 × 512) of 3085 patients constituted 100% of the dataset, 50% and 25% of which were randomly retained as two new datasets. Two new series of images (seg112 image [112 × 112] and seg224 image [224 × 224]) were automatically generated by a segmentation model. Four pretrained neural networks for nasopharyngeal diseases classification were trained under the nine datasets (full image, seg112 image, and seg224 image, each with 100% dataset, 50% dataset, and 25% dataset).
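As an illustration of the dataset construction described above, the nine training sets (three image types, each at three subset sizes) could be assembled as follows. This is a sketch, not the authors' code: `segment` here is a stand-in for their trained segmentation model, and all names are assumptions.

```python
import random

def make_datasets(full_images, seed=0):
    """Build nine datasets: {full, seg112, seg224} x {100%, 50%, 25%}.

    `full_images` is a list of (image, label) pairs at 512 x 512.
    """
    rng = random.Random(seed)

    def segment(image, size):
        # Placeholder: the paper uses a segmentation model to extract
        # the nasopharyngeal region at `size` x `size`; here we only
        # record the intended crop size.
        return ("crop", size, image)

    datasets = {}
    for fraction in (1.0, 0.5, 0.25):
        # Randomly retain a fraction of the full dataset.
        subset = rng.sample(full_images, int(len(full_images) * fraction))
        datasets[("full", fraction)] = subset
        datasets[("seg112", fraction)] = [(segment(img, 112), y) for img, y in subset]
        datasets[("seg224", fraction)] = [(segment(img, 224), y) for img, y in subset]
    return datasets
```

Each of the four pretrained classification networks would then be trained once per dataset.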
Statistical Tests
The receiver operating characteristic curve was used to evaluate the performance of the models. Analysis of variance was used to compare the performance of the models built with different datasets. Statistical significance was set at P < 0.05.
Results
When the 100% dataset was used for training, the performances of the models trained with the seg112 images (average area under the curve [aAUC] 0.949 ± 0.052), seg224 images (aAUC 0.948 ± 0.053), and full images (aAUC 0.935 ± 0.053) were similar (P = 0.611). When the 25% dataset was used for training, the mean aAUC of the models that were trained with seg112 images (0.823 ± 0.116) and seg224 images (0.765 ± 0.155) was significantly higher than the models that were trained with full images (0.640 ± 0.154).
Data Conclusion
The proposed method can potentially improve the performance of the DL model for automatic recognition of diseases in nasopharyngeal MRI.
Level of Evidence
4
Technical Efficacy Stage
1
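The evaluation above (ROC area under the curve per model, compared across training datasets with analysis of variance) can be sketched in plain Python. This is an illustrative re-implementation of the two statistics, not the authors' analysis code.

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def anova_f(*groups):
    """One-way ANOVA F statistic for several groups of AUC values."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (between / df_between) / (within / df_within)
```

In practice one would compute a P value from the F statistic (e.g. with `scipy.stats.f_oneway`) and compare it against the 0.05 threshold.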
DOI: 10.1002/jmri.28112
PMID: 35157782
Publisher: John Wiley & Sons, Inc (Hoboken, USA)
ISSN: 1053-1807
EISSN: 1522-2586
Source: Wiley Online Library All Journals
Subjects: anatomical partition; automatic segmentation; Datasets; Deep learning; Diseases; Females; Image classification; Image segmentation; Machine learning; Magnetic resonance imaging; Medical imaging; MRI recognition; nasopharyngeal region; Nasopharynx; Neural networks; Otolaryngology; Patients; Performance enhancement; Performance evaluation; Population studies; Recognition; Statistical analysis; Statistical tests; Training; Variance analysis