Multi-Layers Feature Fusion of Convolutional Neural Network for Scene Classification of Remote Sensing

Remote sensing scene classification remains a challenging task in remote sensing applications. Effectively extracting features from a dataset of limited scale is crucial for improving scene classification. Recently, convolutional neural networks (CNNs) have performed impressively in different fields of computer vision and have been applied to remote sensing. However, most works focus on the feature maps of the last convolution layer and pay little attention to the benefits of additional layers. In fact, the feature information hidden in different layers has the potential to improve feature discrimination. The main focus of this work is how to exploit the potential of multiple layers of a CNN model. Therefore, this paper proposes multi-layer feature fusion based on CNNs and designs a fusion module to address the relevant issues of fusion. In this module, first, all feature maps are transformed to matching sizes, since feature maps of different scales cannot be fused directly; then, two fusion methods are introduced to integrate feature maps from different layers rather than the last convolution layer only; finally, the fused features are delivered to the next layer or to the classifier, as in a routine CNN. The experimental results show that the proposed methods achieve promising performance on public datasets.


Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 121685-121694
Main authors: Ma, Chenhui; Mu, Xiaodong; Sha, Dexuan
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page 121694
container_issue
container_start_page 121685
container_title IEEE access
container_volume 7
creator Ma, Chenhui
Mu, Xiaodong
Sha, Dexuan
description Remote sensing scene classification remains a challenging task in remote sensing applications. Effectively extracting features from a dataset of limited scale is crucial for improving scene classification. Recently, convolutional neural networks (CNNs) have performed impressively in different fields of computer vision and have been applied to remote sensing. However, most works focus on the feature maps of the last convolution layer and pay little attention to the benefits of additional layers. In fact, the feature information hidden in different layers has the potential to improve feature discrimination. The main focus of this work is how to exploit the potential of multiple layers of a CNN model. Therefore, this paper proposes multi-layer feature fusion based on CNNs and designs a fusion module to address the relevant issues of fusion. In this module, first, all feature maps are transformed to matching sizes, since feature maps of different scales cannot be fused directly; then, two fusion methods are introduced to integrate feature maps from different layers rather than the last convolution layer only; finally, the fused features are delivered to the next layer or to the classifier, as in a routine CNN. The experimental results show that the proposed methods achieve promising performance on public datasets.
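The fusion module described above — align feature maps from different layers to a common spatial size, then combine them before the classifier — can be sketched roughly as follows. This is an illustrative NumPy sketch, not the paper's implementation: the average-pooling resize and the two fusion modes shown (channel concatenation and element-wise sum) are assumptions standing in for the transforms and fusion methods the authors actually use.

```python
import numpy as np

def resize_map(fmap, target_hw):
    # Downsample a feature map of shape (C, H, W) to (C, H', W') by
    # average pooling; assumes H and W are integer multiples of H', W'.
    c, h, w = fmap.shape
    th, tw = target_hw
    fh, fw = h // th, w // tw
    return fmap.reshape(c, th, fh, tw, fw).mean(axis=(2, 4))

def fuse_layers(feature_maps, mode="concat"):
    # Align all maps to the smallest spatial size present, then fuse.
    target = min((f.shape[1], f.shape[2]) for f in feature_maps)
    aligned = [resize_map(f, target) for f in feature_maps]
    if mode == "concat":   # stack along the channel axis
        return np.concatenate(aligned, axis=0)
    if mode == "sum":      # element-wise addition (needs equal channel counts)
        return np.sum(aligned, axis=0)
    raise ValueError(f"unknown fusion mode: {mode}")

# Hypothetical feature maps from a shallow and a deep convolution layer:
shallow = np.random.rand(64, 16, 16)
deep = np.random.rand(64, 4, 4)
fused = fuse_layers([shallow, deep], mode="concat")
print(fused.shape)  # (128, 4, 4)
```

The fused tensor then plays the role of the last-layer feature maps in a routine CNN, feeding the next layer or the classifier.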
doi_str_mv 10.1109/ACCESS.2019.2936215
format Article
fulltext fulltext
identifier ISSN: 2169-3536
ispartof IEEE access, 2019, Vol.7, p.121685-121694
issn 2169-3536
2169-3536
language eng
recordid cdi_ieee_primary_8805301
source IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals
subjects Artificial neural networks
Classification
Computer vision
Convolution
convolutional neural network
Data mining
Datasets
Encoding
Feature extraction
Feature maps
Modules
Multi-layer feature fusion
Multilayers
Neural networks
Remote sensing
remote sensing image
scene classification
Semantics
Shape
title Multi-Layers Feature Fusion of Convolutional Neural Network for Scene Classification of Remote Sensing
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-13T02%3A31%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-Layers%20Feature%20Fusion%20of%20Convolutional%20Neural%20Network%20for%20Scene%20Classification%20of%20Remote%20Sensing&rft.jtitle=IEEE%20access&rft.au=Ma,%20Chenhui&rft.date=2019&rft.volume=7&rft.spage=121685&rft.epage=121694&rft.pages=121685-121694&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2019.2936215&rft_dat=%3Cproquest_ieee_%3E2455604135%3C/proquest_ieee_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2455604135&rft_id=info:pmid/&rft_ieee_id=8805301&rft_doaj_id=oai_doaj_org_article_a485e576aa734cbd8e82c7a0f2f81e4c&rfr_iscdi=true