Global Transformer and Dual Local Attention Network via Deep-Shallow Hierarchical Feature Fusion for Retinal Vessel Segmentation

Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features, and so fail to capture the global and local characterizations of fundus images simultaneously, resulting in limited segmentation performance for fine vessels.
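The global context the abstract refers to is typically captured with self-attention, in which every pixel position attends to every other position. As an illustration only (not the authors' implementation; the function name and shapes are hypothetical), a minimal scaled dot-product self-attention over a flattened feature map might look like:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a flattened feature map.
    Every position attends to every other position, which is how a
    transformer captures long-distance dependence between pixels.
    x: (positions, dim); wq/wk/wv: (dim, dim) projection matrices."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])          # (positions, positions)
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row
    return weights @ v                              # each output mixes all positions
```

With a zero query projection the attention weights become uniform, so every output row is simply the mean of the value rows: a degenerate case that makes the all-to-all mixing explicit.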

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2023-09, Vol. 53 (9), p. 5826-5839
Main authors: Li, Yang; Zhang, Yue; Liu, Jing-Yu; Wang, Kang; Zhang, Kai; Zhang, Gen-Sheng; Liao, Xiao-Feng; Yang, Guang
Format: Article
Language: English
Subjects:
Online access: Order full text
Description: Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference in semantic information between deep and shallow features, and so fail to capture the global and local characterizations of fundus images simultaneously, resulting in limited segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to overcome the above limitations. First, the GT is developed to integrate the global information in the retinal image; it effectively captures the long-distance dependence between pixels, alleviating the discontinuity of blood vessels in the segmentation results. Second, the DLA, which is constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information, consolidating the edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales in the deep learning framework, which can mitigate the attenuation of valid information during feature fusion. We verified the GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that our GT-DLA-dsHFF achieves superior performance against current methods, and detailed discussions verify the efficacy of the three proposed modules. Segmentation results on diseased images show the robustness of our proposed GT-DLA-dsHFF. Implementation code will be available at https://github.com/YangLibuaa/GT-DLA-dsHFF .
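Two of the ingredients the DLA module is built from, dilated convolution and the squeeze-excitation block, can be sketched in a few lines. The following is an illustrative numpy sketch under assumed 1-D shapes, not the paper's implementation:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """1-D dilated convolution (valid padding): kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    adding parameters -- the mechanism DLA varies across branches."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output sample
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

def squeeze_excitation(features, w1, w2):
    """Squeeze-and-excitation: global-average-pool each channel
    ('squeeze'), pass the result through a small two-layer gate, and
    rescale the channels ('excite'). features: (channels, length);
    w1: (hidden, channels); w2: (channels, hidden)."""
    squeezed = features.mean(axis=1)                # (channels,)
    hidden = np.maximum(w1 @ squeezed, 0.0)         # ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid in (0, 1)
    return features * gates[:, None]                # channel-wise reweighting
```

With a 3-tap kernel, dilation 1 gives a receptive field of 3 samples while dilation 4 widens it to 9, which is why stacking branches with varied dilation rates lets a module see both fine and coarse vessel structure at the same parameter cost.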
DOI: 10.1109/TCYB.2022.3194099
ISSN: 2168-2267
EISSN: 2168-2275
PMID: 35984806
Source: IEEE/IET Electronic Library
Subjects:
Algorithms
Blood vessels
Decoding
Deep-shallow hierarchical feature fusion (dsHFF)
dual local attention (DLA)
Edge detection
Feature extraction
global transformer (GT)
Image edge detection
Image segmentation
medical image analysis
Medical imaging
Retinal images
retinal vessel segmentation
Retinal vessels
Transformers