Arbitrary view position and direction rendering for large-scale scenes

Detailed description

Bibliographic details
Main authors: Takahashi, T., Kawasaki, H., Ikeuchi, K., Sakauchi, M.
Format: Conference paper
Language: English
Description
Summary: This paper presents a new method for rendering views, especially those of large-scale scenes such as broad city landscapes. The main contribution of our method is that it can easily render a view from an arbitrary point in an arbitrary direction on the ground in a virtual environment. Our method belongs to the family of work that employs plenoptic functions; however, unlike other works of this type, it allows us to render a novel view from almost any point on the plane in which the images are taken, whereas previous methods impose constraints on their reconstructable area. Thus, when synthesizing a large-scale virtual environment such as a city, our method has a great advantage. One application of our method is a driving simulator in the ITS domain: we can generate a view from any lane of a road using images captured while driving along just one lane. Our method first captures panoramic images with an omni-directional camera or a similar measuring device while moving along a straight line, recording the capture position of each image. When rendering, the method divides the stored panoramic images into vertical slits, selects suitable ones based on our theory, and reassembles them to generate an image. The method can build a virtual city with walk-through capability, in which people can move and look around rather freely. In this paper, we describe the basic theory of a new plenoptic function, analyze the applicable areas of the theory and the characteristics of the generated images, and demonstrate a complete working system using both indoor and outdoor scenes.
ISSN:1063-6919
DOI:10.1109/CVPR.2000.854815
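
The slit-based rendering described in the summary can be sketched in a few lines. The Python snippet below is a minimal illustration only, not the authors' implementation: it assumes the capture path runs along the x-axis, that each panorama maps azimuth linearly to image columns, and every name (render_novel_view, capture_xs, and so on) is hypothetical. For each vertical slit of the output view, it intersects the viewing ray with the capture line, picks the panorama recorded nearest that intersection, and copies the slit for the same azimuth.

    import numpy as np

    def render_novel_view(panoramas, capture_xs, view_pos, view_dir,
                          hfov=np.pi / 3, out_width=640):
        # panoramas  : list of HxW(xC) 360-degree panoramas captured along y = 0,
        #              with azimuth 0 along +x and increasing counter-clockwise.
        # capture_xs : x-coordinate at which each panorama was captured.
        # view_pos   : (x, y) of the novel viewpoint, off the capture line.
        # view_dir   : azimuth (radians) of the novel view's optical axis.
        h, w = panoramas[0].shape[:2]
        out = np.zeros((h, out_width) + panoramas[0].shape[2:],
                       dtype=panoramas[0].dtype)
        xs = np.asarray(capture_xs, dtype=float)
        px, py = view_pos
        # One viewing ray (one output slit) per column across the horizontal
        # FOV; azimuth decreases left to right because it grows counter-clockwise.
        thetas = view_dir + np.linspace(hfov / 2, -hfov / 2, out_width)
        for col, theta in enumerate(thetas):
            dx, dy = np.cos(theta), np.sin(theta)
            if abs(dy) < 1e-9:
                continue  # ray parallel to the capture line: no intersection
            t = -py / dy  # solve py + t*dy = 0 for the capture line y = 0
            if t <= 0:
                continue  # the capture line lies behind the viewpoint
            x_hit = px + t * dx
            # Select the panorama recorded nearest to the intersection point...
            i = int(np.argmin(np.abs(xs - x_hit)))
            # ...and copy its vertical slit looking in the same direction,
            # since that slit samples the same ray as the novel view.
            slit = int((theta % (2 * np.pi)) / (2 * np.pi) * w) % w
            out[:, col] = panoramas[i][:, slit]
        return out

This reproduces only the core reassembly step; the paper's theory for selecting suitable slits and its analysis of the applicable area are not captured here.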