Automatic Segmentation of Vestibular Schwannoma from T2-Weighted MRI by Deep Spatial Attention with Hardness-Weighted Loss
Automatic segmentation of vestibular schwannoma (VS) tumors from magnetic resonance imaging (MRI) would facilitate efficient and accurate volume measurement to guide patient management and improve clinical workflow. Accuracy and robustness are challenged by low contrast, a small target region, and low through-plane resolution. We introduce a 2.5D convolutional neural network (CNN) able to exploit the different in-plane and through-plane resolutions encountered in standard-of-care imaging protocols. We use an attention module to enable the CNN to focus on the small target and propose supervising the learning of attention maps for more accurate segmentation. Additionally, we propose a hardness-weighted Dice loss function that gives higher weights to harder voxels to boost the training of CNNs. Experiments with ablation studies on the VS tumor segmentation task show that: 1) the proposed 2.5D CNN outperforms its 2D and 3D counterparts; 2) our supervised attention mechanism outperforms unsupervised attention; 3) the voxel-level hardness-weighted Dice loss can improve the performance of CNNs. Our method achieved an average Dice score and ASSD of 0.87 and 0.43 mm, respectively. This will facilitate patient management decisions in clinical practice.
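The hardness-weighted loss named in the title weights each voxel by how hard it currently is to classify, so that incorrectly predicted voxels dominate the gradient. A minimal NumPy sketch, assuming the per-voxel weight takes the form w_i = lam * |p_i - g_i| + (1 - lam); the exact weight form and the value of lam are illustrative assumptions, not taken from this record:

```python
import numpy as np

def hardness_weighted_dice_loss(pred, target, lam=0.6, eps=1e-6):
    """Voxel-level hardness-weighted Dice loss (illustrative sketch).

    Each voxel gets weight w_i = lam * |p_i - g_i| + (1 - lam), so voxels
    the model currently gets wrong ("hard" voxels) contribute more to the
    loss. lam in [0, 1] controls how strongly hardness is emphasized.
    """
    p = pred.ravel().astype(np.float64)    # predicted foreground probabilities
    g = target.ravel().astype(np.float64)  # binary ground-truth labels
    w = lam * np.abs(p - g) + (1.0 - lam)  # per-voxel hardness weight
    num = 2.0 * np.sum(w * p * g) + eps
    den = np.sum(w * (p + g)) + eps
    return 1.0 - num / den

# A perfect prediction drives the loss to zero:
gt = np.array([0.0, 1.0, 1.0, 0.0])
print(hardness_weighted_dice_loss(gt, gt))  # → 0.0
```

With lam = 0 every weight is 1 and this reduces to the standard soft Dice loss, so the hardness emphasis can be tuned without changing the loss's overall shape.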
Saved in:
Published in: | arXiv.org 2019-06 |
---|---|
Main Authors: | Wang, Guotai; Shapey, Jonathan; Li, Wenqi; Dorent, Reuben; Demitriadis, Alex; Bisdas, Sotirios; Paddick, Ian; Bradford, Robert; Ourselin, Sebastien; Vercauteren, Tom |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Wang, Guotai; Shapey, Jonathan; Li, Wenqi; Dorent, Reuben; Demitriadis, Alex; Bisdas, Sotirios; Paddick, Ian; Bradford, Robert; Ourselin, Sebastien; Vercauteren, Tom |
description | Automatic segmentation of vestibular schwannoma (VS) tumors from magnetic resonance imaging (MRI) would facilitate efficient and accurate volume measurement to guide patient management and improve clinical workflow. Accuracy and robustness are challenged by low contrast, a small target region, and low through-plane resolution. We introduce a 2.5D convolutional neural network (CNN) able to exploit the different in-plane and through-plane resolutions encountered in standard-of-care imaging protocols. We use an attention module to enable the CNN to focus on the small target and propose supervising the learning of attention maps for more accurate segmentation. Additionally, we propose a hardness-weighted Dice loss function that gives higher weights to harder voxels to boost the training of CNNs. Experiments with ablation studies on the VS tumor segmentation task show that: 1) the proposed 2.5D CNN outperforms its 2D and 3D counterparts; 2) our supervised attention mechanism outperforms unsupervised attention; 3) the voxel-level hardness-weighted Dice loss can improve the performance of CNNs. Our method achieved an average Dice score and ASSD of 0.87 and 0.43 mm, respectively. This will facilitate patient management decisions in clinical practice. |
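The ASSD figure quoted in the description (average symmetric surface distance, in mm) averages, in both directions, each surface voxel's distance to the closest voxel on the other segmentation's surface. A brute-force 2D NumPy sketch; the 4-neighbourhood boundary definition and unit pixel spacing are simplifying assumptions, and real evaluations use 3D masks with the scan's voxel spacing:

```python
import numpy as np

def surface_points(mask):
    """Coordinates of foreground pixels that touch the background
    (4-neighbourhood), i.e. the object boundary."""
    pts = []
    padded = np.pad(mask, 1)  # zero-pad so border pixels count as boundary
    for i, j in zip(*np.nonzero(mask)):
        pi, pj = i + 1, j + 1
        neigh = [padded[pi - 1, pj], padded[pi + 1, pj],
                 padded[pi, pj - 1], padded[pi, pj + 1]]
        if min(neigh) == 0:
            pts.append((i, j))
    return np.array(pts, dtype=np.float64)

def assd(mask_a, mask_b, spacing=1.0):
    """Average symmetric surface distance between two binary masks."""
    sa, sb = surface_points(mask_a), surface_points(mask_b)
    # pairwise Euclidean distances between the two boundaries
    d = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1) * spacing
    # average nearest-neighbour distance in both directions
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1  # 4x4 square
b = np.zeros((8, 8), int); b[3:7, 2:6] = 1  # same square, shifted down by 1
print(assd(a, a))  # → 0.0 for identical masks
```

Passing the physical voxel spacing instead of 1.0 is what turns the pixel-index distances into millimetres, matching the units of the 0.43 mm result reported above.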
doi_str_mv | 10.48550/arxiv.1906.03906 |
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
backlink | arXiv preprint: https://doi.org/10.48550/arXiv.1906.03906; published version: https://doi.org/10.1007/978-3-030-32245-8_30 |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2019-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_1906_03906 |
source | arXiv.org; Free E-Journals |
subjects | Ablation; Artificial neural networks; Computer Science - Computer Vision and Pattern Recognition; Hardness; Image segmentation; Magnetic resonance imaging; NMR; Nuclear magnetic resonance; Performance enhancement; Protocol (computers); Tumors; Volume measurement; Workflow |
title | Automatic Segmentation of Vestibular Schwannoma from T2-Weighted MRI by Deep Spatial Attention with Hardness-Weighted Loss |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T00%3A07%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Automatic%20Segmentation%20of%20Vestibular%20Schwannoma%20from%20T2-Weighted%20MRI%20by%20Deep%20Spatial%20Attention%20with%20Hardness-Weighted%20Loss&rft.jtitle=arXiv.org&rft.au=Wang,%20Guotai&rft.date=2019-06-10&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.1906.03906&rft_dat=%3Cproquest_arxiv%3E2238247676%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2238247676&rft_id=info:pmid/&rfr_iscdi=true |