Neural Face Video Compression using Multiple Views
Recent advances in deep generative models led to the development of neural face video compression codecs that use an order of magnitude less bandwidth than engineered codecs. These neural codecs reconstruct the current frame by warping a source frame and using a generative model to compensate for imperfections in the warped source frame. Thereby, the warp is encoded and transmitted using a small number of keypoints rather than a dense flow field, which leads to massive savings compared to traditional codecs. However, by relying on a single source frame only, these methods lead to inaccurate reconstructions (e.g. one side of the head becomes unoccluded when turning the head and has to be synthesized). Here, we aim to tackle this issue by relying on multiple source frames (views of the face) and present encouraging results.
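The following is a minimal illustrative sketch of the keypoint-driven, multi-view reconstruction idea described in the abstract; it is not the authors' implementation. A least-squares affine fit between keypoints stands in for the paper's learned dense warp, and the confidence-weighted blend across source views is an assumption chosen for illustration; all function names and shapes are hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): warp several source frames
# toward the driving pose using keypoints, then blend the warped views.
import torch
import torch.nn.functional as F

def affine_from_keypoints(kp_src, kp_drv):
    """Least-squares 2x3 affine map sending driving keypoints to source keypoints.

    kp_src, kp_drv: (K, 2) tensors of keypoints in normalized [-1, 1] coordinates.
    Returns a (2, 3) matrix usable with F.affine_grid.
    """
    ones = torch.ones(kp_drv.shape[0], 1)
    A = torch.cat([kp_drv, ones], dim=1)             # (K, 3)
    theta = torch.linalg.lstsq(A, kp_src).solution   # (3, 2), solves A @ theta ≈ kp_src
    return theta.T                                   # (2, 3)

def warp_source(frame, kp_src, kp_drv):
    """Warp a (1, 3, H, W) source frame toward the driving-frame pose."""
    theta = affine_from_keypoints(kp_src, kp_drv).unsqueeze(0)   # (1, 2, 3)
    grid = F.affine_grid(theta, frame.shape, align_corners=False)
    return F.grid_sample(frame, grid, align_corners=False)

def blend_views(warped, kp_srcs, kp_drv):
    """Blend warped views, weighting views whose keypoints are closer to the
    driving configuration more heavily (illustrative confidence measure)."""
    dists = torch.stack([(kp - kp_drv).norm(dim=1).mean() for kp in kp_srcs])
    weights = torch.softmax(-dists, dim=0)                       # (V,)
    stacked = torch.stack(warped)                                # (V, 1, 3, H, W)
    return (weights.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)

# Toy usage with random data: two source views, ten keypoints each.
H = W = 64
sources = [torch.rand(1, 3, H, W) for _ in range(2)]
kp_srcs = [torch.rand(10, 2) * 2 - 1 for _ in range(2)]
kp_drv = torch.rand(10, 2) * 2 - 1

warped = [warp_source(f, kp, kp_drv) for f, kp in zip(sources, kp_srcs)]
prediction = blend_views(warped, kp_srcs, kp_drv)  # a generator would refine this
print(prediction.shape)  # torch.Size([1, 3, 64, 64])
```

In the actual codec, only the driving keypoints are transmitted per frame, and a generative model refines the blended warp to fill in regions no source view covers.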
Saved in:
Published in: | arXiv.org 2022-04 |
---|---|
Main authors: | Volokitin, Anna; Brugger, Stefan; Benlalah, Ali; Martin, Sebastian; Amberg, Brian; Tschannen, Michael |
Format: | Article |
Language: | eng |
Subjects: | Codec; Video compression |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Volokitin, Anna; Brugger, Stefan; Benlalah, Ali; Martin, Sebastian; Amberg, Brian; Tschannen, Michael |
description | Recent advances in deep generative models led to the development of neural face video compression codecs that use an order of magnitude less bandwidth than engineered codecs. These neural codecs reconstruct the current frame by warping a source frame and using a generative model to compensate for imperfections in the warped source frame. Thereby, the warp is encoded and transmitted using a small number of keypoints rather than a dense flow field, which leads to massive savings compared to traditional codecs. However, by relying on a single source frame only, these methods lead to inaccurate reconstructions (e.g. one side of the head becomes unoccluded when turning the head and has to be synthesized). Here, we aim to tackle this issue by relying on multiple source frames (views of the face) and present encouraging results. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2650100661 |
source | Free E-Journals |
subjects | Codec; Video compression |
title | Neural Face Video Compression using Multiple Views |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T19%3A20%3A47IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Neural%20Face%20Video%20Compression%20using%20Multiple%20Views&rft.jtitle=arXiv.org&rft.au=Volokitin,%20Anna&rft.date=2022-04-13&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2650100661%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2650100661&rft_id=info:pmid/&rfr_iscdi=true |