Attention-Guided Pyramid Context Network for Polyp Segmentation in Colonoscopy Images

Recently, deep convolutional neural networks (CNNs) have provided an effective tool for automated polyp segmentation in colonoscopy images. However, most CNN-based methods do not fully consider the feature interaction among different layers and often cannot provide satisfactory segmentation performance. In this paper, a novel attention-guided pyramid context network (APCNet) is proposed for accurate and robust polyp segmentation in colonoscopy images. Specifically, considering that different network layers represent the polyp in different aspects, APCNet first extracts multi-layer features in a pyramid structure, then applies an attention-guided multi-layer aggregation strategy that refines the context features of each layer with the complementary information of the other layers. To obtain abundant context features, APCNet employs a context extraction module that explores the context information of each layer via local information retainment and global information compaction. Through top-down deep supervision, APCNet implements coarse-to-fine polyp segmentation and precisely localizes the polyp region. Extensive experiments on two in-domain and four out-of-domain datasets show that APCNet is comparable to 19 state-of-the-art methods. Moreover, it holds a more appropriate trade-off between effectiveness and computational complexity than these competing methods.
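
For readers who want a concrete picture of the mechanisms the abstract names, the sketch below illustrates one plausible reading of them in PyTorch. It is a minimal sketch, not the authors' APCNet: the class names (ContextExtraction, AttentionGuidedAggregation), the channel sizes, and the specific choices of a dilated convolution for local information retainment, global average pooling for global information compaction, and a sigmoid attention gate are all assumptions, since this record gives only the high-level description.

```python
# A minimal sketch of the mechanisms the abstract names, NOT the authors'
# APCNet: class names, channel sizes, and the concrete operator choices
# below are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextExtraction(nn.Module):
    """Hypothetical context extraction module: a local branch that retains
    spatial detail plus a global branch that compacts image-level context."""

    def __init__(self, channels: int):
        super().__init__()
        # Local information retainment: dilated 3x3 conv keeps resolution
        # while enlarging the receptive field.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global information compaction: pool to 1x1 and re-weight channels.
        self.global_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.local(x) * self.global_gate(x)


class AttentionGuidedAggregation(nn.Module):
    """Hypothetical attention-guided fusion of one pyramid level with the
    (semantically stronger) level above it."""

    def __init__(self, channels: int):
        super().__init__()
        self.attention = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the deeper feature to the shallow feature's resolution.
        deep_up = F.interpolate(deep, size=shallow.shape[2:],
                                mode="bilinear", align_corners=False)
        # Let the deep feature gate the shallow one: complementary
        # information from another layer refines this layer's context.
        gated = shallow * self.attention(deep_up)
        return self.fuse(torch.cat([gated, deep_up], dim=1))


if __name__ == "__main__":
    shallow = torch.randn(1, 64, 88, 88)  # fine-scale pyramid feature
    deep = torch.randn(1, 64, 22, 22)     # coarse-scale pyramid feature
    ctx = ContextExtraction(64)(shallow)
    fused = AttentionGuidedAggregation(64)(ctx, deep)
    print(fused.shape)  # torch.Size([1, 64, 88, 88])
```

In a full decoder, one such fusion per pyramid level, each followed by a small prediction head trained against the ground-truth mask, would give the top-down deep supervision and coarse-to-fine behavior the abstract describes.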


Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2023-01, Vol. 72, p. 1-1
Main Authors: Yue, Guanghui; Li, Siying; Cong, Runmin; Zhou, Tianwei; Lei, Baiying; Wang, Tianfu
Format: Article
Language: English
Subjects: Artificial neural networks; attention; Colonoscopy; colonoscopy image; Context; Data mining; deep learning; Domains; Feature extraction; Image segmentation; Multilayers; polyp segmentation; Pyramid context network; Semantics; Task analysis; Transformers
Online Access: Order full text
DOI: 10.1109/TIM.2023.3244219
ISSN: 0018-9456
EISSN: 1557-9662
Source: IEEE Electronic Library (IEL)
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T18%3A10%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Attention-Guided%20Pyramid%20Context%20Network%20for%20Polyp%20Segmentation%20in%20Colonoscopy%20Images&rft.jtitle=IEEE%20transactions%20on%20instrumentation%20and%20measurement&rft.au=Yue,%20Guanghui&rft.date=2023-01-01&rft.volume=72&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=0018-9456&rft.eissn=1557-9662&rft.coden=IEIMAO&rft_id=info:doi/10.1109/TIM.2023.3244219&rft_dat=%3Cproquest_RIE%3E2784549706%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2784549706&rft_id=info:pmid/&rft_ieee_id=10058111&rfr_iscdi=true