Exploring Multi-Level Attention and Semantic Relationship for Remote Sensing Image Captioning
Remote sensing image captioning, which aims to understand high-level semantic information and the interactions of different ground objects, is an emerging research topic. Though image captioning has developed rapidly with convolutional neural networks (CNNs) and recurrent neural networks (RNNs), …
Saved in:
Published in: | IEEE access 2020, Vol.8, p.2608-2620 |
---|---|
Main authors: | Yuan, Zhenghang; Li, Xuelong; Wang, Qi |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | 2620 |
---|---|
container_issue | |
container_start_page | 2608 |
container_title | IEEE access |
container_volume | 8 |
creator | Yuan, Zhenghang; Li, Xuelong; Wang, Qi |
description | Remote sensing image captioning, which aims to understand high-level semantic information and the interactions of different ground objects, is an emerging research topic. Though image captioning has developed rapidly with convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the image captioning task for remote sensing images still suffers from two main limitations. One limitation is that the scales of objects in remote sensing images vary dramatically, which makes it difficult to obtain an effective image representation. Another limitation is that the visual relationship in remote sensing images is still underused, although it has great potential to improve the final performance. To deal with these two limitations, an effective framework for captioning remote sensing images is proposed in this paper. The framework is based on multi-level attention and multi-label attribute graph convolution. Specifically, the proposed multi-level attention module can adaptively focus not only on specific spatial features, but also on features of specific scales. Moreover, the designed attribute graph convolution module can employ the attribute graph to learn more effective attribute features for image captioning. Extensive experiments are conducted, and the proposed method achieves superior performance on the UCM-captions, Sydney-captions and RSICD datasets. |
doi_str_mv | 10.1109/ACCESS.2019.2962195 |
format | Article |
fullrecord | publisher: IEEE (Piscataway); CODEN: IAECCG; ORCID: 0000-0002-0648-9973, 0000-0001-8130-3748, 0000-0002-7028-4956 |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2020, Vol.8, p.2608-2620 |
issn | 2169-3536 2169-3536 |
language | eng |
recordid | cdi_ieee_primary_8943170 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals |
subjects | Artificial neural networks; Convolution; deep learning; Feature extraction; graph convolutional networks (GCNs); image captioning; Image representation; Modules; Neural networks; Object recognition; Recurrent neural networks; Remote sensing; Remote sensing image; semantic understanding; Semantics; Task analysis; Training; Visualization |
title | Exploring Multi-Level Attention and Semantic Relationship for Remote Sensing Image Captioning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T02%3A19%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Exploring%20Multi-Level%20Attention%20and%20Semantic%20Relationship%20for%20Remote%20Sensing%20Image%20Captioning&rft.jtitle=IEEE%20access&rft.au=Yuan,%20Zhenghang&rft.date=2020&rft.volume=8&rft.spage=2608&rft.epage=2620&rft.pages=2608-2620&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2019.2962195&rft_dat=%3Cproquest_ieee_%3E2454716848%3C/proquest_ieee_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2454716848&rft_id=info:pmid/&rft_ieee_id=8943170&rft_doaj_id=oai_doaj_org_article_6291be876e564600824f235b8c4ad73c&rfr_iscdi=true |
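The description above names two components: a multi-level attention module that weights spatial positions within each feature scale and then the scales themselves, and an attribute graph convolution module that learns attribute features over a multi-label attribute graph. The PyTorch sketch below is not the authors' implementation; it is a minimal illustration of those two ideas, with assumed layer names (MultiLevelAttention, AttributeGCN), feature sizes, and a made-up adjacency matrix standing in for the attribute co-occurrence graph.

```python
# Hedged sketch only: illustrates the two ideas named in the abstract under assumed shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelAttention(nn.Module):
    """Attend over spatial positions within each scale, then over the scales themselves."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.spatial_att = nn.Linear(feat_dim, 1)  # scores each spatial location
        self.scale_att = nn.Linear(feat_dim, 1)    # scores each pooled per-scale feature

    def forward(self, scale_feats):
        # scale_feats: list of tensors, each of shape (batch, num_positions_i, feat_dim)
        pooled = []
        for feats in scale_feats:
            alpha = torch.softmax(self.spatial_att(feats), dim=1)   # (B, N_i, 1)
            pooled.append((alpha * feats).sum(dim=1))               # (B, feat_dim)
        pooled = torch.stack(pooled, dim=1)                         # (B, S, feat_dim)
        beta = torch.softmax(self.scale_att(pooled), dim=1)         # (B, S, 1)
        return (beta * pooled).sum(dim=1)                           # (B, feat_dim)


class AttributeGCN(nn.Module):
    """One graph-convolution layer over attribute embeddings with a fixed adjacency matrix."""

    def __init__(self, num_attributes: int, emb_dim: int, out_dim: int, adjacency: torch.Tensor):
        super().__init__()
        self.embed = nn.Embedding(num_attributes, emb_dim)
        self.weight = nn.Linear(emb_dim, out_dim, bias=False)
        # Row-normalised adjacency, e.g. built from attribute co-occurrence statistics (assumed).
        self.register_buffer("adj", adjacency / adjacency.sum(dim=1, keepdim=True).clamp(min=1e-6))

    def forward(self):
        x = self.embed.weight                      # (num_attributes, emb_dim)
        return F.relu(self.adj @ self.weight(x))   # (num_attributes, out_dim)


if __name__ == "__main__":
    # Toy shapes only: three feature scales (e.g. 14x14, 7x7, 4x4 maps) and eight attributes.
    B, D = 2, 512
    feats = [torch.randn(B, n, D) for n in (196, 49, 16)]
    fused = MultiLevelAttention(D)(feats)
    gcn = AttributeGCN(num_attributes=8, emb_dim=300, out_dim=D, adjacency=torch.rand(8, 8))
    print(fused.shape, gcn().shape)  # torch.Size([2, 512]) torch.Size([8, 512])
```

In a full captioning pipeline, the fused visual feature and the attribute features would typically condition an RNN decoder that generates the caption word by word; the scoring layers, normalisation, and graph construction shown here are illustrative choices, not details taken from the paper.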