Automatic camera control in virtual environments augmented using multiple sparse videos

Bibliographic Details
Published in: Computers & Graphics, 2011-04, Vol. 35 (2), p. 412-421
Main Authors: Silva, Jeferson R., Santos, Thiago T., Morimoto, Carlos H.
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple-sparse-camera free-view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576, with several moving objects, at about 11 fps.

Highlights:
► Real-time rendering of an interactive augmented virtual environment for surveillance.
► Segmentation and tracking of moving objects using sparse cameras and the homographic constraint.
► Correction of color artifacts caused by cameras with significant color differences using simulated annealing.
► Generation of perspective-corrected views of moving objects using pre-computed segmentation and tracking information and projective texture mapping on billboards.
► Automatic virtual camera control to generate videos of selected subjects (third-person view) or to display the scene as viewed by a subject (first-person view), facilitating monitoring and surveillance tasks.
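As an illustration of the billboard-based rendering mentioned in the abstract, the sketch below computes the corners of an upright quad placed at a tracked object's ground-plane position and rotated to face the virtual camera. The function name, its parameters, and the z-up ground-plane convention are assumptions made for illustration only and are not taken from the paper's implementation.

```python
import numpy as np

def billboard_corners(foot_pos, width, height, cam_pos):
    """Corners (4x3) of an upright billboard standing at foot_pos and
    rotated about the vertical axis to face cam_pos.
    Hypothetical helper: the paper's exact parameterization may differ."""
    up = np.array([0.0, 0.0, 1.0])           # assume the ground plane is z = 0
    to_cam = cam_pos - foot_pos
    to_cam[2] = 0.0                           # keep the quad vertical (yaw-only rotation)
    to_cam /= np.linalg.norm(to_cam)          # degenerate if the camera is directly overhead
    right = np.cross(up, to_cam)              # horizontal axis of the quad
    half = 0.5 * width * right
    top = height * up
    return np.array([foot_pos - half,         # bottom-left
                     foot_pos + half,         # bottom-right
                     foot_pos + half + top,   # top-right
                     foot_pos - half + top])  # top-left

# Example: a 0.6 m wide, 1.8 m tall billboard for a tracked person,
# seen from a virtual camera placed at (5, 2, 1.7).
corners = billboard_corners(np.array([1.0, 3.0, 0.0]), 0.6, 1.8,
                            np.array([5.0, 2.0, 1.7]))
```

In the system described by the abstract, such a quad would then be texture-mapped with the foreground-segmented crop of the object taken from the real camera best matching the virtual viewpoint (view-dependent texturing), while foreground masks remove the object from the projected video streams.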
ISSN: 0097-8493, 1873-7684
DOI: 10.1016/j.cag.2011.01.012