Beyond the Visible: Jointly Attending to Spectral and Spatial Dimensions with HSI-Diffusion for the FINCH Spacecraft
creator | Vyse, Ian; Dagli, Rishit; Chadha, Dav Vrat; Ma, John P; Chen, Hector; Ruparelia, Isha; Seran, Prithvi; Xie, Matthew; Aamer, Eesa; Armstrong, Aidan; Black, Naveen; Borstein, Ben; Caldwell, Kevin; Dahanaggamaarachchi, Orrin; Dai, Joe; Fatima, Abeer; Lu, Stephanie; Michet, Maxime; Paul, Anoushka; Po, Carrie Ann; Prakash, Shivesh; Prosser, Noa; Roy, Riddhiman; Shinjo, Mirai; Shofman, Iliya; Silayan, Coby; Sox-Harris, Reid; Zheng, Shuhan; Nguyen, Khang |
description | Satellite remote sensing missions have gained popularity over the past
fifteen years due to their ability to cover large swaths of land at regular
intervals, making them ideal for monitoring environmental trends. The FINCH
mission, a 3U+ CubeSat equipped with a hyperspectral camera, aims to monitor
crop residue cover in agricultural fields. Although hyperspectral imaging
captures both spectral and spatial information, it is prone to various types of
noise, including random noise, stripe noise, and dead pixels. Effective
denoising of these images is crucial for downstream scientific tasks.
Traditional methods, including hand-crafted techniques encoding strong priors,
learned 2D image denoising methods applied across different hyperspectral
bands, or diffusion generative models applied independently on bands, often
struggle with varying noise strengths across spectral bands, leading to
significant spectral distortion. This paper presents a novel approach to
hyperspectral image denoising using latent diffusion models that integrate
spatial and spectral information. We particularly do so by building a 3D
diffusion model and presenting a 3-stage training approach on real and
synthetically crafted datasets. The proposed method preserves image structure
while reducing noise. Evaluations on both popular hyperspectral denoising
datasets and synthetically crafted datasets for the FINCH mission demonstrate
the effectiveness of this approach. |
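The abstract names three degradation types that motivate the denoising work: random noise, stripe noise, and dead pixels. As a minimal illustrative sketch (not the paper's method — the function name `degrade_hsi` and all parameter values are assumptions), these can be simulated on a hyperspectral cube with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_hsi(cube, sigma=0.05, stripe_frac=0.3, dead_frac=0.001):
    """Apply the three noise types named in the abstract to an HSI cube.

    cube: float array of shape (bands, height, width), values in [0, 1].
    sigma, stripe_frac, dead_frac: illustrative strengths, not from the paper.
    """
    # Random (Gaussian) noise across the whole cube.
    noisy = cube + rng.normal(0.0, sigma, cube.shape)
    n_bands, h, w = cube.shape
    # Stripe noise: a constant per-column offset in a subset of bands,
    # mimicking detector column miscalibration.
    for b in rng.choice(n_bands, int(stripe_frac * n_bands), replace=False):
        noisy[b] += rng.uniform(-0.2, 0.2, size=(1, w))
    # Dead pixels: a sparse random mask of zeroed-out readings.
    dead = rng.random(cube.shape) < dead_frac
    noisy[dead] = 0.0
    return np.clip(noisy, 0.0, 1.0)

clean = rng.random((32, 64, 64))   # toy cube: 32 bands, 64x64 pixels
noisy = degrade_hsi(clean)
```

Note that the stripe offsets vary per band, which is one source of the band-varying noise strength the abstract says traditional per-band methods struggle with.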
doi_str_mv | 10.48550/arxiv.2406.10724 |
format | Article |
identifier | DOI: 10.48550/arxiv.2406.10724 |
language | eng |
recordid | cdi_arxiv_primary_2406_10724 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition Computer Science - Learning |
title | Beyond the Visible: Jointly Attending to Spectral and Spatial Dimensions with HSI-Diffusion for the FINCH Spacecraft |