Cross-modal attention guided visual reasoning for referring image segmentation

The goal of referring image segmentation (RIS) is to generate the foreground mask of the object described by a natural language expression. The key to RIS is learning valid multimodal features across the visual and linguistic modalities so that the referred object can be identified accurately. In this paper, a cross-modal attention-guided visual reasoning model for referring segmentation is proposed. First, multi-scale detail information is captured by a pyramidal convolution module to enhance the visual representation. Then, the entity words of the referring expression and the relevant image regions are aligned by a cross-modal attention mechanism, so that all the entities described by the expression can be identified. Finally, a fully connected multimodal graph is constructed from the multimodal features and the relationship cues in the expression, and visual reasoning is performed step by step on the graph to highlight the correct entity while suppressing irrelevant ones. Experimental results on four benchmark datasets show that the proposed method achieves consistent performance improvements (+1.13% on UNC, +3.06% on UNC+, +2.1% on G-Ref, and +1.11% on ReferIt). The effectiveness and feasibility of each component are further verified by extensive ablation studies.
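To make the alignment step concrete, the following is a minimal sketch (in PyTorch, not the authors' released code) of a cross-modal attention block in which word embeddings of the expression attend over flattened image-region features; the dimensions, layer names, and toy inputs are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Attend from language tokens (queries) to image regions (keys/values)."""

    def __init__(self, vis_dim: int, lang_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(lang_dim, hidden_dim)  # word queries
        self.k_proj = nn.Linear(vis_dim, hidden_dim)   # region keys
        self.v_proj = nn.Linear(vis_dim, hidden_dim)   # region values
        self.scale = hidden_dim ** -0.5

    def forward(self, words, regions):
        # words:   (B, L, lang_dim)  word embeddings of the referring expression
        # regions: (B, N, vis_dim)   flattened H*W visual feature map
        q = self.q_proj(words)                  # (B, L, hidden)
        k = self.k_proj(regions)                # (B, N, hidden)
        v = self.v_proj(regions)                # (B, N, hidden)
        # word-to-region affinity, normalized over the N regions
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, L, N)
        return attn @ v, attn                   # word-aligned visual features


# Toy usage with random tensors standing in for real CNN / language-encoder outputs.
if __name__ == "__main__":
    block = CrossModalAttention(vis_dim=512, lang_dim=300)
    words = torch.randn(2, 12, 300)          # 12 words per expression
    regions = torch.randn(2, 26 * 26, 512)   # 26x26 feature map, flattened
    aligned, attn = block(words, regions)
    print(aligned.shape, attn.shape)         # (2, 12, 256), (2, 12, 676)

In the full model described by the abstract, word-aligned visual features of this kind would serve as node features for the fully connected multimodal graph on which stepwise reasoning is performed.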

Detailed Description

Bibliographic Details
Published in: Multimedia tools and applications, 2023-08, Vol.82 (19), p.28853-28872
Main Authors: Zhang, Wenjing; Hu, Mengnan; Tan, Quange; Zhou, Qianli; Wang, Rong
Format: Article
Language: English
Subjects:
Online Access: Full text
container_end_page 28872
container_issue 19
container_start_page 28853
container_title Multimedia tools and applications
container_volume 82
creator Zhang, Wenjing
Hu, Mengnan
Tan, Quange
Zhou, Qianli
Wang, Rong
description The goal of referring image segmentation (RIS) is to generate the foreground mask of the object described by a natural language expression. The key to RIS is learning valid multimodal features across the visual and linguistic modalities so that the referred object can be identified accurately. In this paper, a cross-modal attention-guided visual reasoning model for referring segmentation is proposed. First, multi-scale detail information is captured by a pyramidal convolution module to enhance the visual representation. Then, the entity words of the referring expression and the relevant image regions are aligned by a cross-modal attention mechanism, so that all the entities described by the expression can be identified. Finally, a fully connected multimodal graph is constructed from the multimodal features and the relationship cues in the expression, and visual reasoning is performed step by step on the graph to highlight the correct entity while suppressing irrelevant ones. Experimental results on four benchmark datasets show that the proposed method achieves consistent performance improvements (+1.13% on UNC, +3.06% on UNC+, +2.1% on G-Ref, and +1.11% on ReferIt). The effectiveness and feasibility of each component are further verified by extensive ablation studies.
doi_str_mv 10.1007/s11042-023-14586-9
format Article
fulltext fulltext
identifier ISSN: 1380-7501
ispartof Multimedia tools and applications, 2023-08, Vol.82 (19), p.28853-28872
issn 1380-7501
1573-7721
language eng
recordid cdi_proquest_journals_2840671250
source SpringerLink Journals - AutoHoldings
subjects Ablation
Computer Communication Networks
Computer Science
Data Structures and Information Theory
Image segmentation
Keywords
Language
Linguistics
Methods
Multimedia
Multimedia Information Systems
Natural language
Neural networks
Reasoning
Semantics
Special Purpose and Application-Based Systems
title Cross-modal attention guided visual reasoning for referring image segmentation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T19%3A34%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Cross-modal%20attention%20guided%20visual%20reasoning%20for%20referring%20image%20segmentation&rft.jtitle=Multimedia%20tools%20and%20applications&rft.au=Zhang,%20Wenjing&rft.date=2023-08-01&rft.volume=82&rft.issue=19&rft.spage=28853&rft.epage=28872&rft.pages=28853-28872&rft.issn=1380-7501&rft.eissn=1573-7721&rft_id=info:doi/10.1007/s11042-023-14586-9&rft_dat=%3Cproquest_cross%3E2840671250%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2840671250&rft_id=info:pmid/&rfr_iscdi=true