Egocentric videoconferencing

We introduce a method for egocentric videoconferencing that enables hands-free video calls, for instance by people wearing smart glasses or other mixed-reality devices. Videoconferencing conveys valuable non-verbal communication and facial expression cues, but usually requires a front-facing camera....

Bibliographic details
Published in: ACM Transactions on Graphics, 2020-11, Vol. 39 (6), p. 1-16, Article 268
Main authors: Elgharib, Mohamed; Mendiratta, Mohit; Thies, Justus; Niessner, Matthias; Seidel, Hans-Peter; Tewari, Ayush; Golyanik, Vladislav; Theobalt, Christian
Format: Article
Language: English
Subjects: Animation; Computer graphics; Computing methodologies; Image manipulation; Rendering
Online access: Full text
Description: We introduce a method for egocentric videoconferencing that enables hands-free video calls, for instance by people wearing smart glasses or other mixed-reality devices. Videoconferencing conveys valuable non-verbal communication and facial expression cues, but usually requires a front-facing camera. Using a frontal camera in a hands-free setting when a person is on the move is impractical. Even holding a mobile phone camera in front of the face while sitting for a long duration is inconvenient. To overcome these issues, we propose a low-cost wearable egocentric camera setup that can be integrated into smart glasses. Our goal is to mimic a classical video call, and therefore we transform the egocentric perspective of this camera into a front-facing video. To this end, we employ a conditional generative adversarial neural network that learns a transition from the highly distorted egocentric views to the frontal views common in videoconferencing. Our approach learns to transfer expression details directly from the egocentric view without using a complex intermediate parametric expression model, as used by related face reenactment methods. We successfully handle subtle expressions that are not easily captured by parametric blendshape-based solutions, e.g., tongue movement, eye movements, eye blinking, strong expressions, and depth-varying movements. To gain control over the rigid head movements in the target view, we condition the generator on synthetic renderings of a moving neutral face. This allows us to synthesize results at different head poses. Our technique produces temporally smooth, video-realistic renderings in real time using a video-to-video translation network in conjunction with a temporal discriminator. We demonstrate the improved capabilities of our technique by comparing against related state-of-the-art approaches.
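The description outlines the core pipeline: a conditional GAN generator maps a distorted egocentric frame, together with a synthetic rendering of a neutral face at the target head pose, to a frontal frame, while a temporal discriminator judges short windows of consecutive output frames to encourage temporal smoothness. The following Python sketch illustrates only this conditioning scheme; the module names, layer sizes, and three-frame window are assumptions chosen for illustration and do not reproduce the authors' implementation.

# Minimal sketch (assumption): a conditional generator that takes an egocentric
# frame plus a neutral-face rendering (head-pose conditioning) and outputs a
# frontal frame, adversarially trained with a temporal discriminator over a
# short stack of consecutive frames. Architecture sizes are illustrative only.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, cond_channels=6, out_channels=3):
        super().__init__()
        # 3 channels from the egocentric view + 3 from the neutral-face rendering
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, egocentric, neutral_render):
        # Concatenate the two conditioning images along the channel axis.
        return self.net(torch.cat([egocentric, neutral_render], dim=1))

class TemporalDiscriminator(nn.Module):
    def __init__(self, window=3, channels=3):
        super().__init__()
        # Scores realism of a short stack of consecutive frontal frames,
        # which penalizes temporally flickering output.
        self.net = nn.Sequential(
            nn.Conv2d(window * channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, frame_stack):
        return self.net(frame_stack)

if __name__ == "__main__":
    G, D = Generator(), TemporalDiscriminator()
    ego = torch.randn(1, 3, 256, 256)        # egocentric fisheye frame
    neutral = torch.randn(1, 3, 256, 256)     # rendered neutral face at target pose
    frontal = G(ego, neutral)                 # synthesized frontal frame
    window = torch.cat([frontal] * 3, dim=1)  # placeholder 3-frame window
    print(frontal.shape, D(window).shape)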
DOI: 10.1145/3414685.3417808
Publisher: ACM (New York, NY, USA)
Publication date: 2020-11-26
ISSN: 0730-0301
EISSN: 1557-7368
Source: ACM Digital Library Complete