Synchronization of Images from Multiple Cameras to Reconstruct a Moving Human

Bibliographic Details
Main Authors: Moore, Carl; Duckworth, Toby; Aspin, Rob; Roberts, David
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Order full text
Description
Summary: What level of synchronization is necessary between images from multiple cameras in order to realistically reconstruct a moving human in 3D? Live reconstruction of the human form, from cameras surrounding the subject, could bridge the gap between video conferencing and Immersive Collaborative Virtual Environments (ICVEs). Video conferencing faithfully reproduces what someone looks like, whereas an ICVE faithfully reproduces what they look at. While 3D video has been demonstrated in tele-immersion prototypes, its visual and temporal quality has been well below what has become acceptable in video conferencing. Managed synchronization of the acquisition stage is universally used today to ensure that the multiple images feeding the reconstruction algorithm were taken at the same time; however, this inevitably increases latency and jitter. We measure the temporal characteristics of the capture stage and the impact of inconsistency on the reconstruction algorithm it feeds. This gives us both input and output characteristics for synchronization. From these we determine whether frame synchronization of multiple camera video streams actually needs to be delivered for 3D reconstruction and, if not, what level of temporal divergence is acceptable across the captured image frames.
ISSN: 1550-6525
DOI: 10.1109/DS-RT.2010.15
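
To make the notion of temporal divergence across captured image frames concrete, the sketch below is an illustration only (it is not taken from the paper; the 10 ms tolerance and all names are assumptions). It measures how far apart the capture timestamps of one multi-camera frame set lie and checks whether they fall within a chosen tolerance before the set is handed to a reconstruction algorithm.

    # Illustrative Python sketch; the tolerance and names are assumptions, not the authors' method.
    def max_skew_ms(timestamps_ms):
        """Largest capture-time difference (ms) across one multi-camera frame set."""
        return max(timestamps_ms) - min(timestamps_ms)

    def frame_set_usable(timestamps_ms, tolerance_ms=10.0):
        """Accept the frame set only if every capture lies within the tolerance window."""
        return max_skew_ms(timestamps_ms) <= tolerance_ms

    # Capture timestamps (ms) reported by four hypothetical cameras for one frame set.
    captures = [1000.0, 1003.2, 998.7, 1005.9]
    print(f"temporal divergence: {max_skew_ms(captures):.1f} ms")  # 7.2 ms
    print(frame_set_usable(captures))                              # True under a 10 ms tolerance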