Automatic Detection and Segmentation of Breast Cancer on MRI Using Mask R-CNN Trained on Non–Fat-Sat Images and Tested on Fat-Sat Images
Published in: | Academic radiology 2022-01, Vol.29 (Suppl 1), p.S135-S144 |
---|---|
Main authors: | Zhang, Yang; Chan, Siwa; Park, Vivian Youngjean; Chang, Kai-Ting; Mehta, Siddharth; Kim, Min Jung; Combs, Freddie J.; Chang, Peter; Chow, Daniel; Parajuli, Ritesh; Mehta, Rita S.; Lin, Chin-Yao; Chien, Sou-Hsin; Chen, Jeon-Hor; Su, Min-Ying |
Format: | Article |
Language: | eng |
Subjects: | Breast MRI; Deep learning; Fully-automatic detection; Mask R-CNN |
Online access: | Full text |
container_end_page | S144 |
---|---|
container_issue | Suppl 1 |
container_start_page | S135 |
container_title | Academic radiology |
container_volume | 29 |
creator | Zhang, Yang; Chan, Siwa; Park, Vivian Youngjean; Chang, Kai-Ting; Mehta, Siddharth; Kim, Min Jung; Combs, Freddie J.; Chang, Peter; Chow, Daniel; Parajuli, Ritesh; Mehta, Rita S.; Lin, Chin-Yao; Chien, Sou-Hsin; Chen, Jeon-Hor; Su, Min-Ying |
description | Computer-aided methods have been widely applied to diagnose lesions on breast magnetic resonance imaging (MRI). The first step was to identify abnormal areas. A deep learning Mask Regional Convolutional Neural Network (Mask R-CNN) was implemented to search the entire set of images and detect suspicious lesions.
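The abstract gives no implementation details beyond naming the network; the following is a minimal sketch, under stated assumptions, of how a two-class (background vs. lesion) Mask R-CNN detector could be instantiated with torchvision. The study used a ResNet-101 backbone, whereas torchvision's ready-made constructor uses a ResNet-50 FPN backbone, and the 3-channel dummy slice merely stands in for the DCE-MRI inputs described below.

```python
# Minimal sketch, not the authors' code. The paper used ResNet-101 as the
# backbone; torchvision's convenience constructor ships a ResNet-50 FPN.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes: background + lesion (assumption based on the abstract).
model = maskrcnn_resnet50_fpn(num_classes=2)
model.eval()

# Hypothetical 3-channel slice standing in for the MRI input (precontrast
# and subtraction images of both breasts, per the results paragraph).
dummy_slice = torch.rand(3, 256, 256)
with torch.no_grad():
    out = model([dummy_slice])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'
print(out["boxes"].shape, out["masks"].shape)
```

In evaluation mode the model returns, per image, predicted boxes, class labels, confidence scores, and instance masks, which is the output format assumed by the scoring sketches that follow.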
Two DCE-MRI datasets were used: 241 patients scanned with a non–fat-sat sequence for training and 98 patients scanned with a fat-sat sequence for testing. All patients had confirmed unilateral mass cancers. Each tumor was segmented using the fuzzy c-means clustering algorithm to serve as the ground truth. Mask R-CNN was implemented with ResNet-101 as the backbone; the network output bounding boxes and segmented tumor masks, which were evaluated using the Dice Similarity Coefficient (DSC). The detection performance and the trade-off between sensitivity and specificity were analyzed using free-response receiver operating characteristic (FROC) analysis.
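As a concrete illustration of the evaluation metric named above, here is a minimal NumPy sketch of the Dice Similarity Coefficient for two binary masks. It is not the authors' code, and the convention of returning 1.0 when both masks are empty is an assumption.

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (truth).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:                      # both masks empty: define DSC as 1.0
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: 4x4 masks overlapping in one column per row.
a = np.array([[1, 1, 0, 0]] * 4)
b = np.array([[1, 0, 0, 0]] * 4)
print(round(dice_coefficient(a, b), 2))  # 0.67
```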
When the precontrast and subtraction images of both breasts were used as input, false positives from the heart and from normal parenchymal enhancement could be minimized. The training set had 1469 positive slices (containing a lesion) and 9135 negative slices. In 10-fold cross-validation, the mean accuracy was 0.86 and the mean DSC was 0.82. The testing dataset had 1568 positive and 7264 negative slices, with an accuracy of 0.75 and a DSC of 0.79. When the per-slice results were combined, 240 of 241 (99.6%) lesions in the training dataset and 98 of 98 (100%) lesions in the testing dataset were identified.
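The abstract reports per-lesion detection rates obtained by combining per-slice results but does not spell out the combination rule. A plausible minimal sketch, under the assumption that a lesion counts as identified when any of its positive slices has a predicted box overlapping the ground-truth box above an IoU threshold, is:

```python
# Sketch of one possible per-slice-to-per-lesion aggregation rule
# (an assumption; the published pipeline may differ).
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def lesion_detected(per_slice: List[Dict], iou_threshold: float = 0.5) -> bool:
    """per_slice: [{'truth': Box, 'detections': [Box, ...]}, ...] for one lesion."""
    return any(
        iou(s["truth"], det) >= iou_threshold
        for s in per_slice
        for det in s["detections"]
    )
```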
Deep learning with Mask R-CNN provided a feasible method to search breast MRI and to localize and segment lesions. It may be integrated with other artificial intelligence algorithms to develop a fully automatic breast MRI diagnostic system. |
doi_str_mv | 10.1016/j.acra.2020.12.001 |
format | Article |
eissn | 1878-4046 |
pmid | 33317911 |
publisher | Elsevier Inc |
fulltext | fulltext |
identifier | ISSN: 1076-6332 |
ispartof | Academic radiology, 2022-01, Vol.29 (Suppl 1), p.S135-S144 |
issn | 1076-6332; 1878-4046 |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_8192591 |
source | MEDLINE; Elsevier ScienceDirect Journals |
subjects | Artificial Intelligence; Breast - diagnostic imaging; Breast - pathology; Breast MRI; Breast Neoplasms - diagnostic imaging; Breast Neoplasms - pathology; Deep learning; Female; Fully-automatic detection; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Mask R-CNN; Neural Networks, Computer |
title | Automatic Detection and Segmentation of Breast Cancer on MRI Using Mask R-CNN Trained on Non–Fat-Sat Images and Tested on Fat-Sat Images |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-26T10%3A51%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Automatic%20Detection%20and%20Segmentation%20of%20Breast%20Cancer%20on%20MRI%20Using%20Mask%20R-CNN%20Trained%20on%20Non%E2%80%93Fat-Sat%20Images%20and%20Tested%20on%20Fat-Sat%20Images&rft.jtitle=Academic%20radiology&rft.au=Zhang,%20Yang&rft.date=2022-01-01&rft.volume=29&rft.issue=Suppl%201&rft.spage=S135&rft.epage=S144&rft.pages=S135-S144&rft.issn=1076-6332&rft.eissn=1878-4046&rft_id=info:doi/10.1016/j.acra.2020.12.001&rft_dat=%3Cproquest_pubme%3E2470282341%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2470282341&rft_id=info:pmid/33317911&rft_els_id=S1076633220306760&rfr_iscdi=true |