Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN
Multisensor fusion is of great importance in Earth observation related applications. For instance, hyperspectral images (HSIs) provide rich spectral information while light detection and ranging (LiDAR) data provide elevation information, and using HSI and LiDAR data together can achieve better classification performance. In this paper, an unsupervised feature extraction framework, named patch-to-patch convolutional neural network (PToP CNN), is proposed for collaborative classification of hyperspectral and LiDAR data. More specifically, a three-tower PToP mapping is first developed to seek an accurate representation from HSI to LiDAR data, aiming at merging multiscale features between the two different sources. Then, by integrating hidden layers of the designed PToP CNN, the extracted features are expected to possess deeply fused characteristics. Accordingly, features from different hidden layers are concatenated into a stacked vector and fed into three fully connected layers. To verify the effectiveness of the proposed classification framework, experiments are executed on two benchmark remote sensing data sets. The experimental results demonstrate that the proposed method provides superior performance compared with state-of-the-art classifiers such as two-branch CNN and context CNN.
Published in: | IEEE transactions on cybernetics 2020-01, Vol.50 (1), p.100-111 |
---|---|
Main authors: | Zhang, Mengmeng; Li, Wei; Du, Qian; Gao, Lianru; Zhang, Bing |
Format: | Article |
Language: | English (eng) |
Subjects: | Artificial neural networks; Classification; Deep convolutional neural network (CNN); Feature extraction; Hyperspectral image (HSI) classification; LiDAR; Multisensor fusion; Remote sensing |
Online access: | Order full text |
container_end_page | 111 |
---|---|
container_issue | 1 |
container_start_page | 100 |
container_title | IEEE transactions on cybernetics |
container_volume | 50 |
creator | Zhang, Mengmeng; Li, Wei; Du, Qian; Gao, Lianru; Zhang, Bing |
description | Multisensor fusion is of great importance in Earth observation related applications. For instance, hyperspectral images (HSIs) provide rich spectral information while light detection and ranging (LiDAR) data provide elevation information, and using HSI and LiDAR data together can achieve better classification performance. In this paper, an unsupervised feature extraction framework, named patch-to-patch convolutional neural network (PToP CNN), is proposed for collaborative classification of hyperspectral and LiDAR data. More specifically, a three-tower PToP mapping is first developed to seek an accurate representation from HSI to LiDAR data, aiming at merging multiscale features between the two different sources. Then, by integrating hidden layers of the designed PToP CNN, the extracted features are expected to possess deeply fused characteristics. Accordingly, features from different hidden layers are concatenated into a stacked vector and fed into three fully connected layers. To verify the effectiveness of the proposed classification framework, experiments are executed on two benchmark remote sensing data sets. The experimental results demonstrate that the proposed method provides superior performance compared with state-of-the-art classifiers such as two-branch CNN and context CNN. |
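The fusion step the abstract describes (features drawn from hidden layers, concatenated into one stacked vector per pixel, then passed through three fully connected layers) can be sketched roughly as follows. This is a minimal illustration only: the feature dimensions, layer widths, activations, and random placeholder weights are assumptions, not the paper's actual PToP CNN configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def fc(x, w, b):
    # one fully connected layer: x @ W + b
    return x @ w + b

# Assumed stand-ins for features taken from hidden layers of the
# three-tower PToP mapping (all dimensions are illustrative).
hsi_feat = rng.standard_normal((8, 64))    # 8 pixels, 64-dim HSI-branch features
lidar_feat = rng.standard_normal((8, 32))  # 32-dim LiDAR-branch features
fused_feat = rng.standard_normal((8, 48))  # 48-dim fused hidden-layer features

# Concatenate hidden-layer features into one stacked vector per pixel.
stacked = np.concatenate([hsi_feat, lidar_feat, fused_feat], axis=1)  # (8, 144)

# Three fully connected layers ending in class scores (weights here are
# random placeholders; in the paper they would be learned).
n_classes = 6
w1, b1 = rng.standard_normal((144, 128)) * 0.1, np.zeros(128)
w2, b2 = rng.standard_normal((128, 64)) * 0.1, np.zeros(64)
w3, b3 = rng.standard_normal((64, n_classes)) * 0.1, np.zeros(n_classes)

scores = fc(relu(fc(relu(fc(stacked, w1, b1)), w2, b2)), w3, b3)
pred = scores.argmax(axis=1)  # predicted class label per pixel
print(stacked.shape, scores.shape, pred.shape)
```

The key design point from the abstract is that classification operates on the stacked vector of multi-layer features rather than on any single branch's output.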
doi_str_mv | 10.1109/TCYB.2018.2864670 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2168-2267 |
ispartof | IEEE transactions on cybernetics, 2020-01, Vol.50 (1), p.100-111 |
issn | 2168-2267; 2168-2275 |
language | eng |
recordid | cdi_ieee_primary_8467496 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial neural networks; Classification; Computer architecture; Decoding; Deep convolutional neural network (CNN); Feature extraction; hyperspectral image (HSI) classification; Hyperspectral imaging; Image detection; Laser radar; Lidar; Mapping; Multisensor fusion; Remote sensing; Simulation; Task analysis |
title | Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-20T06%3A28%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Feature%20Extraction%20for%20Classification%20of%20Hyperspectral%20and%20LiDAR%20Data%20Using%20Patch-to-Patch%20CNN&rft.jtitle=IEEE%20transactions%20on%20cybernetics&rft.au=Zhang,%20Mengmeng&rft.date=2020-01-01&rft.volume=50&rft.issue=1&rft.spage=100&rft.epage=111&rft.pages=100-111&rft.issn=2168-2267&rft.eissn=2168-2275&rft.coden=ITCEB8&rft_id=info:doi/10.1109/TCYB.2018.2864670&rft_dat=%3Cproquest_RIE%3E2308297896%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2308297896&rft_id=info:pmid/30235156&rft_ieee_id=8467496&rfr_iscdi=true |