FloU-Net: An Optical Flow Network for Multi-modal Self-Supervised Image Registration
Image registration is an essential task in image processing, where the final objective is to geometrically align two or more images. In remote sensing, this process allows comparing, fusing, or analyzing data, especially when multi-modal images are used. In addition, multi-modal image registration becomes...
Saved in:
Published in: | IEEE geoscience and remote sensing letters 2023-02, p.1-1 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 1 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE geoscience and remote sensing letters |
container_volume | |
creator | Ibanez, Damian Fernandez-Beltran, Ruben Pla, Filiberto |
description | Image registration is an essential task in image processing, where the final objective is to geometrically align two or more images. In remote sensing, this process allows comparing, fusing, or analyzing data, especially when multi-modal images are used. In addition, multi-modal image registration becomes fairly challenging when the images have a significant difference in scale and resolution, together with small local image deformations. For this purpose, this paper presents a novel optical flow-based image registration network, named FloU-Net, which tries to further exploit inter-sensor synergies by means of deep learning. The proposed method is able to extract spatial information from resolution differences and, through a U-Net backbone, generate an optical flow field estimation to accurately register small local deformations of multi-modal images in a self-supervised fashion. For instance, the registration between Sentinel-2 (S2) and Sentinel-3 (S3) optical data is not trivial, as there are considerable spectral-spatial differences between their sensors. In this case, the higher spatial resolution of S2 results in S2 data being a convenient reference to spatially improve S3 products, as well as those of the forthcoming Fluorescence Explorer (FLEX) mission, since image registration is the initial requirement for obtaining higher-level data products. To validate our method, we compare the proposed FloU-Net with other state-of-the-art techniques using 21 coupled S2/S3 optical images from different locations of interest across Europe. The comparison is performed through different performance measures. Results show that the proposed FloU-Net outperforms the compared methods. The code and dataset are available at https://github.com/ibanezfd/FloU-Net. |
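The self-supervised principle the abstract describes — warp the moving image with the estimated flow field and penalize the photometric difference against the reference — can be sketched in NumPy. This is an illustrative assumption, not the paper's implementation: FloU-Net predicts the flow with a U-Net, and its actual training loss may differ from the plain mean-squared error used here; all function names are hypothetical.

```python
import numpy as np

def bilinear_warp(img, flow):
    """Backward-warp a 2-D image: output[y, x] samples img at
    (y + flow_y, x + flow_x), bilinearly, clamped at the borders."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sx = xs + flow[..., 0]           # x-coordinate to sample from
    sy = ys + flow[..., 1]           # y-coordinate to sample from
    x0 = np.clip(np.floor(sx).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, H - 2)
    wx = np.clip(sx - x0, 0.0, 1.0)  # fractional bilinear weights
    wy = np.clip(sy - y0, 0.0, 1.0)
    top = img[y0, x0] * (1 - wx) + img[y0, x0 + 1] * wx
    bot = img[y0 + 1, x0] * (1 - wx) + img[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

def photometric_loss(reference, moving, flow):
    """Mean squared photometric error after warping the moving image.
    A flow network trained self-supervised minimizes a loss of this kind."""
    return float(np.mean((bilinear_warp(moving, flow) - reference) ** 2))

# A synthetic 3-pixel horizontal shift: the flow that undoes the shift
# should score far better than the zero flow.
rng = np.random.default_rng(0)
reference = rng.random((32, 32))
moving = np.roll(reference, 3, axis=1)   # moving[y, x] == reference[y, x - 3]
flow_true = np.zeros((32, 32, 2))
flow_true[..., 0] = 3.0                  # x-displacement undoing the shift
print(photometric_loss(reference, moving, flow_true)
      < photometric_loss(reference, moving, np.zeros((32, 32, 2))))  # True
```

In the self-supervised setting, the network's only training signal is this warp-and-compare loss: no ground-truth flow is needed, which is what makes multi-modal S2/S3 registration feasible without manually aligned labels.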
doi_str_mv | 10.1109/LGRS.2023.3249902 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1545-598X |
ispartof | IEEE geoscience and remote sensing letters, 2023-02, p.1-1 |
issn | 1545-598X |
language | eng |
recordid | cdi_ieee_primary_10054383 |
source | IEEE Electronic Library (IEL) |
subjects | Convolutional Neural Networks Deformation Feature extraction Image registration Inter-sensor Multi-modal Multi-spectral Optical imaging Optical sensors Sensors Sentinel-2-3 Spatial resolution |
title | FloU-Net: An Optical Flow Network for Multi-modal Self-Supervised Image Registration |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-21T20%3A05%3A40IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=FloU-Net:%20An%20Optical%20Flow%20Network%20for%20Multi-modal%20Self-Supervised%20Image%20Registration&rft.jtitle=IEEE%20geoscience%20and%20remote%20sensing%20letters&rft.au=Ibanez,%20Damian&rft.date=2023-02-27&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=1545-598X&rft.coden=IGRSBY&rft_id=info:doi/10.1109/LGRS.2023.3249902&rft_dat=%3Cieee_RIE%3E10054383%3C/ieee_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10054383&rfr_iscdi=true |