Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos
In recent years, deep learning has been widely used in breast cancer diagnosis, and many high-performance models have emerged. However, most existing deep learning models are based on static breast ultrasound (US) images. In the actual diagnostic process, contrast-enhanced ultrasound (CEUS) is a technique commonly used by radiologists. …
Saved in:
Published in: | IEEE Transactions on Medical Imaging, 2021-09, Vol. 40 (9), p. 2439-2451 |
---|---|
Main Authors: | Chen, Chen; Wang, Yong; Niu, Jianwei; Liu, Xuefeng; Li, Qingfeng; Gong, Xuantong |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
container_end_page | 2451 |
---|---|
container_issue | 9 |
container_start_page | 2439 |
container_title | IEEE transactions on medical imaging |
container_volume | 40 |
creator | Chen, Chen; Wang, Yong; Niu, Jianwei; Liu, Xuefeng; Li, Qingfeng; Gong, Xuantong |
description | In recent years, deep learning has been widely used in breast cancer diagnosis, and many high-performance models have emerged. However, most existing deep learning models are based on static breast ultrasound (US) images. In the actual diagnostic process, contrast-enhanced ultrasound (CEUS) is a technique commonly used by radiologists. Compared with static breast US images, CEUS videos provide more detailed information about a tumor's blood supply and can therefore help radiologists make a more accurate diagnosis. In this paper, we propose a novel diagnosis model based on CEUS videos. The backbone of the model is a 3D convolutional neural network. More specifically, we observe that radiologists generally follow two patterns when reviewing CEUS videos: they focus on specific time slots, and they pay attention to the differences between the CEUS frames and the corresponding US images. To incorporate these two patterns into our deep learning model, we design a domain-knowledge-guided temporal attention module and a channel attention module. We validate our model on our Breast-CEUS dataset of 221 cases. The results show that our model achieves a sensitivity of 97.2% and an accuracy of 86.3%. In particular, incorporating domain knowledge leads to a 3.5% improvement in sensitivity and a 6.0% improvement in specificity. Finally, we also demonstrate the validity of the two domain knowledge modules in the 3D convolutional neural network (C3D) and the 3D ResNet (R3D). |
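The description above outlines the model design: a 3D convolutional backbone, a domain-knowledge-guided temporal attention module that focuses on specific time slots, and a channel attention module that emphasizes differences between CEUS frames and the corresponding US images. As an illustration only, the following is a minimal PyTorch-style sketch of how such attention modules are commonly built; the class names, tensor shapes, and the optional `time_prior` bias are assumptions made for this sketch and do not reproduce the authors' published implementation.

```python
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Pools frame-level features with learned weights so that informative time
    slots (e.g. contrast wash-in/wash-out phases of a CEUS clip) dominate.
    An optional domain-knowledge prior can bias the weights (assumed interface)."""

    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x, time_prior=None):
        # x: (batch, T, feat_dim) frame-level features from a 3D CNN backbone
        logits = self.score(x).squeeze(-1)              # (batch, T)
        if time_prior is not None:
            logits = logits + time_prior                # hypothetical prior, shape (T,)
        weights = torch.softmax(logits, dim=1)          # attention over time
        return (weights.unsqueeze(-1) * x).sum(dim=1)   # (batch, feat_dim)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gating, imagined here as a way to
    emphasize channels encoding CEUS-vs-US differences."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, C, T, H, W) feature map from a 3D convolutional stage
        squeezed = x.mean(dim=(2, 3, 4))                # global average pool -> (batch, C)
        gate = self.fc(squeezed).view(x.size(0), -1, 1, 1, 1)
        return x * gate


if __name__ == "__main__":
    frames = torch.randn(2, 16, 512)          # 2 clips, 16 time slots, 512-dim features
    volume = torch.randn(2, 64, 16, 28, 28)   # 2 clips, 64-channel 3D feature map
    print(TemporalAttention(512)(frames).shape)   # torch.Size([2, 512])
    print(ChannelAttention(64)(volume).shape)     # torch.Size([2, 64, 16, 28, 28])
```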
doi_str_mv | 10.1109/TMI.2021.3078370 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 0278-0062 |
ispartof | IEEE transactions on medical imaging, 2021-09, Vol.40 (9), p.2439-2451 |
issn | 0278-0062; 1558-254X |
language | eng |
recordid | cdi_proquest_journals_2568064002 |
source | IEEE Electronic Library (IEL) |
subjects | 3D convolution; Artificial neural networks; attention mechanism; Breast cancer; Brightness; Browsing; contrast-enhanced ultrasound; Deep learning; Diagnosis; domain knowledge; Domains; Feature extraction; Image contrast; Image enhancement; Medical diagnosis; Medical imaging; Modules; Neural networks; Sensitivity; Solid modeling; Three dimensional models; Tumors; Ultrasonic imaging; Ultrasound; Video; Videos |
title | Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T07%3A48%3A44IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Domain%20Knowledge%20Powered%20Deep%20Learning%20for%20Breast%20Cancer%20Diagnosis%20Based%20on%20Contrast-Enhanced%20Ultrasound%20Videos&rft.jtitle=IEEE%20transactions%20on%20medical%20imaging&rft.au=Chen,%20Chen&rft.date=2021-09-01&rft.volume=40&rft.issue=9&rft.spage=2439&rft.epage=2451&rft.pages=2439-2451&rft.issn=0278-0062&rft.eissn=1558-254X&rft.coden=ITMID4&rft_id=info:doi/10.1109/TMI.2021.3078370&rft_dat=%3Cproquest_RIE%3E2568064002%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2568064002&rft_id=info:pmid/33961552&rft_ieee_id=9425559&rfr_iscdi=true |