Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network

Fast neuromorphic event-based vision sensors (Dynamic Vision Sensor, DVS) can be combined with slower conventional frame-based sensors to enable higher-quality inter-frame interpolation than traditional methods relying on fixed motion approximations using e.g. optical flow. In this work we present a...

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Radomski, Adam, Georgiou, Andreas, Debrunner, Thomas, Li, Chenghan, Longinotti, Luca, Seo, Minwon, Kwak, Moosung, Shin, Chang-Woo, Park, Paul K. J, Ryu, Hyunsurk Eric, Eng, Kynan
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Radomski, Adam
Georgiou, Andreas
Debrunner, Thomas
Li, Chenghan
Longinotti, Luca
Seo, Minwon
Kwak, Moosung
Shin, Chang-Woo
Park, Paul K. J
Ryu, Hyunsurk Eric
Eng, Kynan
description Fast neuromorphic event-based vision sensors (Dynamic Vision Sensor, DVS) can be combined with slower conventional frame-based sensors to enable higher-quality inter-frame interpolation than traditional methods relying on fixed motion approximations using e.g. optical flow. In this work we present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets. It includes a new configurable frame-based image sensor model with realistic image quality reduction effects, and an extended DVS model with more accurate characteristics. We use our simulator to train a novel reconstruction model designed for end-to-end reconstruction of high-fps video. Unlike previously published methods, our method does not require the frame and DVS cameras to have the same optics, positions, or camera resolutions. It is also not limited to objects a fixed distance from the sensor. We show that data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of equivalent or better quality than the state of the art. We also show our sensor generalizing to data recorded by real sensors.
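The abstract rests on the standard DVS operating principle: a pixel emits an event whenever its log intensity changes by more than a fixed contrast threshold. The following is a minimal sketch of that principle applied between two frames, not the authors' simulator; the function name, threshold value, and frame-pair interface are illustrative assumptions.

```python
import numpy as np

def dvs_events(prev_frame, curr_frame, threshold=0.2, eps=1e-6):
    """Minimal DVS model: emit an event wherever the change in log
    intensity between two frames exceeds the contrast threshold.
    Returns (x, y, polarity) arrays; per-event timestamps, noise, and
    refractory effects modeled by a real simulator are omitted."""
    log_prev = np.log(prev_frame.astype(np.float64) + eps)
    log_curr = np.log(curr_frame.astype(np.float64) + eps)
    diff = log_curr - log_prev
    on = diff >= threshold      # brightness increase -> ON event
    off = diff <= -threshold    # brightness decrease -> OFF event
    ys, xs = np.nonzero(on | off)
    polarity = np.where(diff[ys, xs] > 0, 1, -1)
    return xs, ys, polarity

# A single pixel brightening from 50 to 100 crosses the log-intensity
# threshold (log(100/50) ~= 0.69 > 0.2) and yields one ON event.
prev = np.full((4, 4), 50, dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 100
xs, ys, pol = dvs_events(prev, curr)
```

Because events are triggered by log-intensity change, the same physical contrast produces events regardless of absolute brightness, which is what lets event streams capture fast motion that the slower frame sensor misses between exposures.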
doi_str_mv 10.48550/arxiv.2112.09379
format Article
creationdate 2021-12-17
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
oa free_for_read
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2112.09379
language eng
recordid cdi_arxiv_primary_2112_09379
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T20%3A01%3A26IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Enhanced%20Frame%20and%20Event-Based%20Simulator%20and%20Event-Based%20Video%20Interpolation%20Network&rft.au=Radomski,%20Adam&rft.date=2021-12-17&rft_id=info:doi/10.48550/arxiv.2112.09379&rft_dat=%3Carxiv_GOX%3E2112_09379%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true