GAUSS: Guided Encoder-Decoder Architecture for Hyperspectral Unmixing with Spatial Smoothness

creator Ranasinghe, Yasiru
Weerasooriya, Kavinga
Godaliyadda, Roshan
Herath, Vijitha
Ekanayake, Parakrama
Jayasundara, Dhananjaya
Ramanayake, Lakshitha
Senarath, Neranjan
Wickramasinghe, Dulantha
description In recent hyperspectral unmixing (HU) literature, the application of deep learning (DL) has become more prominent, especially with the autoencoder (AE) architecture. We propose a split architecture and use a pseudo-ground truth for abundances to guide the optimization of the `unmixing network' (UN). Preceding the UN, an `approximation network' (AN) is proposed to strengthen the association between the centre pixel and its neighbourhood; because the AN output is both the input to the UN and the reconstruction reference for the `mixing network' (MN), it accentuates spatial correlation in the abundances. In the Guided Encoder-Decoder Architecture for Hyperspectral Unmixing with Spatial Smoothness (GAUSS), we propose using one-hot encoded abundances as the pseudo-ground truth to guide the UN; these are computed with the k-means algorithm so that no prior HU method is required. Furthermore, in contrast to the standard AE for HU, we relax the single-layer constraint on the MN by feeding it the UN-generated abundances. We then experiment with two modifications of the network pre-trained with the GAUSS method. In GAUSS$_\textit{blind}$, the UN and the MN are concatenated so that the reconstruction error gradients back-propagate to the encoder. In GAUSS$_\textit{prime}$, the abundances produced by a reliable signal processing (SP) method are used as the pseudo-ground truth within the GAUSS architecture. According to quantitative and graphical results on four experimental datasets, the three architectures either surpassed or matched the performance of existing HU algorithms from both the DL and SP domains.
doi_str_mv 10.48550/arxiv.2204.07713
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2204.07713
language eng
recordid cdi_arxiv_primary_2204_07713
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title GAUSS: Guided Encoder-Decoder Architecture for Hyperspectral Unmixing with Spatial Smoothness
url https://arxiv.org/abs/2204.07713
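
To make the pipeline described in the abstract concrete, the following is a minimal PyTorch-style sketch of the split architecture: an approximation network (AN) smooths the centre pixel from its neighbourhood, an unmixing network (UN) maps the smoothed spectrum to abundances guided by pseudo-ground-truth labels (one-hot k-means clusters, or SP-derived abundances for GAUSS_prime), and a mixing network (MN) reconstructs the AN output from those abundances, with GAUSS_blind concatenating the UN and MN so reconstruction gradients reach the encoder. The class names, layer sizes, MSE losses, and toy dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal, illustrative sketch of the guided split architecture from the abstract.
# All sizes, losses, and names here are assumptions made for illustration only.
import torch
import torch.nn as nn

BANDS, ENDMEMBERS, NEIGHBOURS = 200, 4, 9   # hypothetical dataset dimensions

class ApproximationNet(nn.Module):
    """AN: maps a pixel's neighbourhood to a spatially smoothed centre spectrum."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(BANDS * NEIGHBOURS, 256), nn.ReLU(),
            nn.Linear(256, BANDS))

    def forward(self, patch):                # patch: (B, NEIGHBOURS * BANDS)
        return self.net(patch)

class UnmixingNet(nn.Module):
    """UN: encoder from a (smoothed) spectrum to abundance fractions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(BANDS, 64), nn.ReLU(),
            nn.Linear(64, ENDMEMBERS))

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)   # non-negative, sum-to-one

class MixingNet(nn.Module):
    """MN: multi-layer decoder from abundances back to a spectrum."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ENDMEMBERS, 64), nn.ReLU(),
            nn.Linear(64, BANDS))

    def forward(self, a):
        return self.net(a)

def gauss_losses(an, un, mn, patch, pseudo_gt):
    """Plain GAUSS step: the UN is guided by pseudo-ground-truth abundances
    (one-hot k-means labels, or SP-derived abundances for GAUSS_prime), while
    the MN reconstructs the AN output from the UN abundances."""
    smoothed = an(patch)                     # AN output: UN input and MN target
    abund = un(smoothed)
    recon = mn(abund.detach())               # no gradient from MN back to the UN here
    guide_loss = nn.functional.mse_loss(abund, pseudo_gt)
    recon_loss = nn.functional.mse_loss(recon, smoothed.detach())
    return guide_loss, recon_loss

def gauss_blind_loss(an, un, mn, patch):
    """GAUSS_blind fine-tuning: UN and MN concatenated so the reconstruction
    error back-propagates to the encoder."""
    smoothed = an(patch)
    recon = mn(un(smoothed))
    return nn.functional.mse_loss(recon, smoothed.detach())

if __name__ == "__main__":
    an, un, mn = ApproximationNet(), UnmixingNet(), MixingNet()
    patch = torch.rand(8, BANDS * NEIGHBOURS)       # toy batch of neighbourhoods
    labels = torch.randint(0, ENDMEMBERS, (8,))     # stand-in for k-means cluster ids
    pseudo_gt = nn.functional.one_hot(labels, ENDMEMBERS).float()
    guide, recon = gauss_losses(an, un, mn, patch, pseudo_gt)
    print(guide.item(), recon.item())
```

The detach() calls reflect the staged training implied by the abstract, where the UN is first guided by the pseudo-ground truth and only the blind fine-tuning stage couples the UN and MN end-to-end; the exact training schedule is an assumption here.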