GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views
Neural rendering methods can achieve near-photorealistic image synthesis of scenes from posed input images. However, when the images are imperfect, e.g., captured in very low-light conditions, state-of-the-art methods fail to reconstruct high-quality 3D scenes. Recent approaches have tried to address this limitation by modeling various degradation processes in the image formation model; however, this limits them to specific image degradations. In this paper, we propose a generalizable neural rendering method that can perform high-fidelity novel view synthesis under several degradations. Our method, GAURA, is learning-based and does not require any test-time scene-specific optimization. It is trained on a synthetic dataset that includes several degradation types. GAURA outperforms state-of-the-art methods on several benchmarks for low-light enhancement, dehazing, deraining, and is on par for motion deblurring. Further, our model can be efficiently fine-tuned to any new incoming degradation using minimal data. We thus demonstrate adaptation results on two unseen degradations, desnowing and removing defocus blur. Code and video results are available at vinayak-vg.github.io/GAURA.
Saved in:
Published in: | arXiv.org 2024-07 |
---|---|
Main authors: | Gupta, Vinayak; Rongali Simhachala Venkata Girish; Mukund Varma T; Tewari, Ayush; Mitra, Kaushik |
Format: | Article |
Language: | eng |
Subjects: | Image degradation; Image quality; Rendering; Synthetic data |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Gupta, Vinayak; Rongali Simhachala Venkata Girish; Mukund Varma T; Tewari, Ayush; Mitra, Kaushik |
description | Neural rendering methods can achieve near-photorealistic image synthesis of scenes from posed input images. However, when the images are imperfect, e.g., captured in very low-light conditions, state-of-the-art methods fail to reconstruct high-quality 3D scenes. Recent approaches have tried to address this limitation by modeling various degradation processes in the image formation model; however, this limits them to specific image degradations. In this paper, we propose a generalizable neural rendering method that can perform high-fidelity novel view synthesis under several degradations. Our method, GAURA, is learning-based and does not require any test-time scene-specific optimization. It is trained on a synthetic dataset that includes several degradation types. GAURA outperforms state-of-the-art methods on several benchmarks for low-light enhancement, dehazing, deraining, and on-par for motion deblurring. Further, our model can be efficiently fine-tuned to any new incoming degradation using minimal data. We thus demonstrate adaptation results on two unseen degradations, desnowing and removing defocus blur. Code and video results are available at vinayak-vg.github.io/GAURA. |
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
publication date | 2024-07-11 |
rights | 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
open access | free_for_read |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-07 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3079558706 |
source | Free E-Journals |
subjects | Image degradation; Image quality; Rendering; Synthetic data |
title | GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-18T08%3A20%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=GAURA:%20Generalizable%20Approach%20for%20Unified%20Restoration%20and%20Rendering%20of%20Arbitrary%20Views&rft.jtitle=arXiv.org&rft.au=Gupta,%20Vinayak&rft.date=2024-07-11&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3079558706%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3079558706&rft_id=info:pmid/&rfr_iscdi=true |