Hybrid dilation and attention residual U-Net for medical image segmentation
Medical image segmentation is a typical task in medical image processing and a critical foundation of medical image analysis. U-Net is widely used for medical image segmentation, but it does not fully exploit channel-wise features or capitalize on contextual information. Therefore, we present HDA-ResUNet, an improved U-Net with residual connections that adds a plug-and-play channel attention (CA) block and a hybrid dilated attention convolutional (HDAC) layer to segment medical images accurately and effectively across different tasks.
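The summary above mentions a plug-and-play channel attention (CA) block applied to the encoder features before decoding. The record does not include the authors' code, so the following is only a minimal squeeze-and-excitation-style sketch in PyTorch of what such a block could look like; the class name, reduction ratio, and overall design are assumptions rather than the paper's exact implementation.

```python
# Hypothetical sketch of a plug-and-play channel attention (CA) block,
# assuming a squeeze-and-excitation-style design; the paper's actual
# block may differ in details such as the reduction ratio.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Squeeze: global average pooling collapses each channel to one value.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP predicts one weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)       # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)   # (B, C, 1, 1) channel weights
        return x * w                      # re-weight the skip-connection features

# Usage: recalibrate an encoder feature map before it is combined with the
# decoder path, instead of U-Net's plain copy-and-concatenate.
skip = torch.randn(2, 64, 128, 128)
ca = ChannelAttention(64)
print(ca(skip).shape)  # torch.Size([2, 64, 128, 128])
```

Because the block only adds a global pooling and two small linear layers per stage, it is cheap enough to attach at every encoder level, which matches the "lightweight, applied to multiple layers" claim in the description.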
Saved in:
Published in: | Computers in biology and medicine 2021-07, Vol.134, p.104449-104449, Article 104449 |
Main authors: | Wang, Zekun; Zou, Yanni; Liu, Peter X. |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
container_end_page | 104449 |
container_issue | |
container_start_page | 104449 |
container_title | Computers in biology and medicine |
container_volume | 134 |
creator | Wang, Zekun; Zou, Yanni; Liu, Peter X. |
description | Medical image segmentation is a typical task in medical image processing and a critical foundation of medical image analysis. U-Net is widely used for medical image segmentation, but it does not fully exploit channel-wise features or capitalize on contextual information. Therefore, we present an improved U-Net with residual connections, adding a plug-and-play, highly portable channel attention (CA) block and a hybrid dilated attention convolutional (HDAC) layer to perform medical image segmentation accurately and effectively across different tasks; we call it HDA-ResUNet, and it fully exploits the advantages of U-Net, the attention mechanism and dilated convolution. In contrast to the simple copy-and-concatenate skip connections of U-Net, the channel attention block is applied to the feature maps extracted along the encoding path before the decoding operation. Since the block is lightweight, it can be applied to multiple layers of the backbone network to recalibrate the channels produced by each layer's encoding operation. In addition, the convolutional layer at the bottom of the "U"-shaped network is replaced by a hybrid dilated attention convolutional (HDAC) layer that fuses information from receptive fields of different sizes. The proposed HDA-ResUNet is evaluated on four datasets: liver and tumor segmentation (LiTS 2017), lung segmentation (Lung dataset), nuclei segmentation in microscope images (DSB 2018) and neuronal structure segmentation (ISBI 2012). The global Dice scores for liver and tumor segmentation (LiTS 2017) reach 0.949 and 0.799. The Dice coefficients for lung segmentation and nuclei segmentation are 0.9797 and 0.9081 respectively, and the information-theoretic score for the last dataset is 0.9703. All segmentation results are more accurate than those of U-Net while using fewer parameters, and the slow convergence of U-Net on DSB 2018 is resolved.
•A medical image segmentation method based on U-Net is proposed.
•A novel channel attention technique is introduced to focus on essential features.
•Dilated convolution is used to enlarge the receptive field and obtain better results.
•Experimental results show that our model has fewer parameters and achieves better segmentation than U-Net. |
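The description above also says the bottleneck of the "U"-shaped network is replaced by an HDAC layer that fuses receptive fields of different sizes. As a rough, hypothetical illustration of the hybrid-dilation part only (the authors' layer additionally includes attention), here is a PyTorch sketch; the dilation rates (1, 2, 5), the 1x1 fusion convolution, and the residual add are assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a hybrid dilated convolution bottleneck that fuses
# receptive fields of different sizes. The paper's HDAC layer additionally
# applies attention to the fused result; dilation rates (1, 2, 5) and the
# 1x1 fusion convolution are assumptions.
import torch
import torch.nn as nn

class HybridDilatedConv(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 5)):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation rates; padding
        # equals the dilation so the spatial size is preserved.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the multi-scale responses back to the original channel count.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale) + x  # residual connection around the bottleneck

bottom = torch.randn(1, 256, 32, 32)
print(HybridDilatedConv(256)(bottom).shape)  # torch.Size([1, 256, 32, 32])
```

Co-prime dilation rates such as 1, 2 and 5 are the usual "hybrid dilated convolution" recipe for avoiding gridding artifacts from stacked dilated kernels, which is presumably the motivation behind the hybrid schedule described here.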
doi_str_mv | 10.1016/j.compbiomed.2021.104449 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0010-4825 |
ispartof | Computers in biology and medicine, 2021-07, Vol.134, p.104449-104449, Article 104449 |
issn | 0010-4825; 1879-0534 |
language | eng |
recordid | cdi_proquest_miscellaneous_2528434475 |
source | Elsevier ScienceDirect Journals Complete |
subjects | Channel attention mechanism; Computer networks; Convolution; Convolutional neural network; Datasets; Decoding; Deep learning; Dilated convolution; Feature extraction; Feature maps; Histone deacetylase; Image analysis; Image processing; Image segmentation; Information theory; Liver; Lungs; Medical image segmentation; Medical imaging; Medical research; Neural coding; Neural networks; Tumors |
title | Hybrid dilation and attention residual U-Net for medical image segmentation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-21T17%3A25%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hybrid%20dilation%20and%20attention%20residual%20U-Net%20for%20medical%20image%20segmentation&rft.jtitle=Computers%20in%20biology%20and%20medicine&rft.au=Wang,%20Zekun&rft.date=2021-07-01&rft.volume=134&rft.spage=104449&rft.epage=104449&rft.pages=104449-104449&rft.artnum=104449&rft.issn=0010-4825&rft.eissn=1879-0534&rft_id=info:doi/10.1016/j.compbiomed.2021.104449&rft_dat=%3Cproquest_cross%3E2547632024%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2547632024&rft_id=info:pmid/33993015&rft_els_id=S0010482521002432&rfr_iscdi=true |