Enhanced fringe-to-phase framework using deep learning

In Fringe Projection Profilometry (FPP), achieving robust and accurate 3D reconstruction with a limited number of fringe patterns remains a challenge in structured light 3D imaging. Conventional methods require a set of fringe images, but using only one or two patterns complicates phase recovery and unwrapping. In this study, we introduce SFNet, a symmetric fusion network that transforms two fringe images into an absolute phase. To enhance output reliability, our framework predicts refined phases by incorporating information from fringe images of a different frequency than those used as input. This allows us to achieve high accuracy with just two images. Comparative experiments and ablation studies validate the effectiveness of our proposed method. The dataset and code are publicly accessible on our project page https://wonhoe-kim.github.io/SFNet.
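For context on the conventional pipeline the abstract contrasts with: classical FPP computes a wrapped phase from several phase-shifted fringe images, then resolves the 2-pi ambiguity (unwrapping), often by combining two fringe frequencies. The sketch below is a minimal NumPy illustration of standard three-step phase shifting and dual-frequency temporal phase unwrapping; the function names and the specific phase-shift scheme are illustrative assumptions, not the authors' SFNet, which replaces these measured steps with a learned network.

    import numpy as np

    # Wrapped phase from three fringe images with phase shifts of
    # -2*pi/3, 0, +2*pi/3 (classical three-step phase shifting);
    # the result lies in (-pi, pi].
    def wrapped_phase(i1, i2, i3):
        return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

    # Dual-frequency temporal phase unwrapping: given the wrapped
    # high-frequency phase phi_h and an already-absolute low-frequency
    # phase phi_l, estimate the integer fringe order k per pixel and
    # recover the absolute high-frequency phase.
    def unwrap_dual_frequency(phi_h, phi_l, f_h, f_l):
        k = np.round(((f_h / f_l) * phi_l - phi_h) / (2.0 * np.pi))
        return phi_h + 2.0 * np.pi * k

A conventional system of this kind needs at least three images per frequency (six in total for the two-frequency unwrapping above); the stated contribution of SFNet is reaching an absolute phase from only two fringe images, with the cross-frequency information learned rather than measured.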

Bibliographic Details
Main Authors: Kim, Won-Hoe; Kim, Bongjoong; Chi, Hyung-Gun; Hyun, Jae-Sang
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: https://arxiv.org/abs/2402.00977
DOI: 10.48550/arxiv.2402.00977
Published: 2024-02-01
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0
Source: arXiv.org