NeuralPassthrough: Learned Real-Time View Synthesis for VR
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Virtual reality (VR) headsets provide an immersive, stereoscopic visual
experience, but at the cost of blocking users from directly observing their
physical environment. Passthrough techniques are intended to address this
limitation by leveraging outward-facing cameras to reconstruct the images that
would otherwise be seen by the user without the headset. This is inherently a
real-time view synthesis challenge, since passthrough cameras cannot be
physically co-located with the eyes. Existing passthrough techniques suffer
from distracting reconstruction artifacts, largely due to the lack of accurate
depth information (especially for near-field and disoccluded objects), and also
exhibit limited image quality (e.g., being low resolution and monochromatic).
In this paper, we propose the first learned passthrough method and assess its
performance using a custom VR headset that contains a stereo pair of RGB
cameras. Through both simulations and experiments, we demonstrate that our
learned passthrough method delivers superior image quality compared to
state-of-the-art methods, while meeting strict VR requirements for real-time,
perspective-correct stereoscopic view synthesis over a wide field of view for
desktop-connected headsets. |
DOI: | 10.48550/arxiv.2207.02186 |
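To make the view-synthesis problem described in the abstract concrete, the sketch below (not taken from the paper) shows the naive depth-based reprojection that any passthrough pipeline must perform: camera pixels are back-projected using a depth map, moved into the eye's coordinate frame, and re-projected. All names here (`K_cam`, `K_eye`, `T_cam_to_eye`, `depth`) are assumptions for illustration; the holes this warp leaves at disocclusions are exactly the artifacts that a learned method like NeuralPassthrough aims to remove.

```python
# Hypothetical illustration (not the paper's method): naive depth-based
# forward reprojection of a passthrough camera image to the eye viewpoint.
import numpy as np

def reproject_to_eye(image, depth, K_cam, K_eye, T_cam_to_eye):
    """Warp a camera image to the eye viewpoint using a per-pixel depth map.

    image: (H, W, 3) RGB frame from the outward-facing camera.
    depth: (H, W) metric depth per camera pixel (assumed known here).
    K_cam, K_eye: 3x3 intrinsics of the camera and the virtual eye.
    T_cam_to_eye: 4x4 rigid transform from camera to eye coordinates.
    Disoccluded regions are left black, illustrating the artifacts that
    arise when the camera cannot be co-located with the eye.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # Back-project camera pixels to 3D points in camera coordinates.
    pts_cam = np.linalg.inv(K_cam) @ pix * depth.reshape(1, -1)
    pts_cam_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])

    # Move the points into the eye's coordinate frame and project them.
    pts_eye = (T_cam_to_eye @ pts_cam_h)[:3]
    proj = K_eye @ pts_eye
    ue = np.round(proj[0] / proj[2]).astype(int)
    ve = np.round(proj[1] / proj[2]).astype(int)

    # Z-buffered splat: nearer points overwrite farther ones.
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)
    valid = (ue >= 0) & (ue < W) & (ve >= 0) & (ve < H) & (pts_eye[2] > 0)
    src = image.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if pts_eye[2, i] < zbuf[ve[i], ue[i]]:
            zbuf[ve[i], ue[i]] = pts_eye[2, i]
            out[ve[i], ue[i]] = src[i]
    return out
```

Even with perfect depth, this kind of reprojection leaves holes where the eye sees surfaces the camera did not, which is why the abstract emphasizes disocclusion and near-field objects as the main sources of passthrough artifacts.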