Compressive Light Field Reconstructions using Deep Learning

Light field imaging is limited in its computational processing demands of high sampling for both spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, these iterative solutions are slow in processing a light field. We present a deep learning approach using a new, two branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high resolution 4D light field from a single coded 2D image. This network decreases reconstruction time significantly while achieving average PSNR values of 26-32 dB on a variety of light fields. In particular, reconstruction time is decreased from 35 minutes to 6.7 minutes as compared to the dictionary method for equivalent visual quality. These reconstructions are performed at small sampling/compression ratios as low as 8%, allowing for cheaper coded light field cameras. We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera. The combination of compressive light field capture with deep learning allows the potential for real-time light field video acquisition systems in the future.
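The coded-capture model the abstract describes can be sketched as a toy simulation: a 4D light field is multiplexed onto a 2D sensor through a per-pixel, per-angle mask, and reconstruction quality is reported in PSNR. This is an illustrative sketch only, not the authors' pipeline; the array sizes, the random binary mask (in place of their diffractive optics), and the `psnr` helper are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4D light field: (angular_u, angular_v, spatial_x, spatial_y).
u, v, x, y = 5, 5, 32, 32
light_field = rng.random((u, v, x, y))

# Coded capture: each sensor pixel sums all angular samples, weighted by
# a per-pixel, per-angle binary mask (multiplexing rays onto a 2D sensor).
mask = (rng.random((u, v, x, y)) < 0.5).astype(float)
coded_2d = (mask * light_field).sum(axis=(0, 1))   # shape (32, 32)

# Sampling/compression ratio: 2D measurements vs. 4D unknowns.
ratio = coded_2d.size / light_field.size           # 1024 / 25600 = 4%
print(f"compression ratio: {ratio:.0%}")

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB, as used to score reconstructions."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```

With 5x5 angular views, a single coded shot keeps only 1/25 of the samples; the paper's reported ratios (down to 8%) correspond to similar angular multiplexing, and the network's job is to invert this many-to-one mapping.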


Bibliographic details
Main authors: Gupta, Mayank, Jauhari, Arjun, Kulkarni, Kuldeep, Jayasuriya, Suren, Molnar, Alyosha, Turaga, Pavan
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Gupta, Mayank; Jauhari, Arjun; Kulkarni, Kuldeep; Jayasuriya, Suren; Molnar, Alyosha; Turaga, Pavan
description Light field imaging is limited in its computational processing demands of high sampling for both spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, these iterative solutions are slow in processing a light field. We present a deep learning approach using a new, two branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high resolution 4D light field from a single coded 2D image. This network decreases reconstruction time significantly while achieving average PSNR values of 26-32 dB on a variety of light fields. In particular, reconstruction time is decreased from 35 minutes to 6.7 minutes as compared to the dictionary method for equivalent visual quality. These reconstructions are performed at small sampling/compression ratios as low as 8%, allowing for cheaper coded light field cameras. We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera. The combination of compressive light field capture with deep learning allows the potential for real-time light field video acquisition systems in the future.
doi_str_mv 10.48550/arxiv.1802.01722
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1802.01722
language eng
recordid cdi_arxiv_primary_1802_01722
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Compressive Light Field Reconstructions using Deep Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T10%3A12%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Compressive%20Light%20Field%20Reconstructions%20using%20Deep%20Learning&rft.au=Gupta,%20Mayank&rft.date=2018-02-05&rft_id=info:doi/10.48550/arxiv.1802.01722&rft_dat=%3Carxiv_GOX%3E1802_01722%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true