Real-Time Lighting Changes for Image-Based Rendering
In this project, we developed a technique for varying the shading and the shadows of objects that are drawn based on a collection of pre-rendered images. To our knowledge this is the first instance of an image-based rendering approach that provides such changes in appearance due to lighting in real time. As with other image-based techniques, our methods are insensitive to object complexity. We make changes in lighting using texture hardware, so we can achieve real-time display rates and can freely intermix image-based models with traditional polygonal models. Applications include vehicle simulation, building walkthroughs and video games.
There are many ways to store samples from images, each trading simplicity and speed against generality of viewing. We emphasize speed. We store our image samples in an image fan: a collection of rendered images of an object taken from a circle of camera positions. This representation is not new; it can be found in systems such as QuickTime VR [Chen 95]. An object is rendered by displaying an image from the fan on a quadrilateral whose plane is most nearly perpendicular to the camera's line of sight. Using an alpha channel allows the background to show through where the object is absent. Our techniques could also be used with other sets of images, such as a collection taken from a hemisphere surrounding an object.
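Image selection from the fan can be sketched as follows. This is a minimal illustration, assuming the fan images are evenly spaced in azimuth around the circle and indexed counterclockwise from azimuth zero; the function and parameter names are ours, not from the original system:

```python
import math

def select_fan_image(camera_azimuth, num_images):
    """Return the index of the pre-rendered fan image whose camera
    position is closest to the current view azimuth (in radians),
    i.e. the image whose display quadrilateral is most nearly
    perpendicular to the line of sight."""
    step = 2.0 * math.pi / num_images      # angular spacing between fan images
    # Round the view azimuth to the nearest sampled camera position.
    return int(round(camera_azimuth / step)) % num_images
```

Because the choice is a simple nearest-sample lookup, it costs essentially nothing per frame, which is what keeps the representation fast.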
We accomplish lighting-invariant rendering by saving normal vector information at each pixel of each image in the fan; we call this the normal image. The vectors are converted into 8-bit values that index into a color map table for a texture. The color map table essentially stores the luminance value associated with each of the normal vectors for a given light and view position. These luminance values are used as blend percentages when alpha-blending the foreground texture image with a black background quadrilateral used to shade the object. We change the shading of a diffuse surface by changing the entries in the color map table rather than performing per-pixel image calculations.
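The table update when the light moves can be sketched as below. This assumes the 8-bit indices refer to a table of 256 quantized unit normals and a diffuse (Lambertian) shading model; the function name and arguments are illustrative, not the authors' exact code:

```python
def update_shading_table(quantized_normals, light_dir):
    """Recompute the color map table for a new light direction:
    entry i holds the diffuse luminance for quantized normal i,
    later used as the alpha-blend percentage against the black
    background quadrilateral. Both inputs are unit vectors."""
    lx, ly, lz = light_dir
    table = []
    for nx, ny, nz in quantized_normals:
        # Lambertian term: clamp N . L to [0, 1] for a diffuse surface.
        table.append(max(0.0, nx * lx + ny * ly + nz * lz))
    return table
```

The key point is that this loop runs once over at most 256 table entries per light change, not once per pixel, which is why the shading update is cheap enough for real-time rates.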
The image fan provides a silhouette of the model from a circle of camera positions. Intuitively, a shadow is a silhouette from the viewpoint of the light source that is then placed on the ground plane. Such a simple placement is, however, insufficient for a realistic view because portions of the bottom edge of the shadow may be disconnected from the bottom edge of the object. Our solution is to divide the silhouette into strips, allowing each column of the silhouette to be placed independently. Each shadow strip is rendered as a translucent quadrilateral on the ground plane. The coordinates of the shadow can be calculated by using the technique of re-drawing objects vertically scaled by zero [Blinn 88].
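The projection behind the scale-by-zero trick can be sketched as follows: each vertex is slid along the light direction until it reaches the ground plane, which flattens the object onto that plane. This is a minimal sketch assuming a ground plane at y = 0 and a directional light; the function name is ours:

```python
def project_to_ground(point, light_dir):
    """Project a point onto the ground plane y = 0 along the light
    direction, in the spirit of the flatten-by-zero shadow trick of
    [Blinn 88]. `light_dir` must have a nonzero y component (the
    light cannot be parallel to the ground)."""
    px, py, pz = point
    lx, ly, lz = light_dir
    t = py / ly                      # ray parameter where the ray meets y = 0
    return (px - t * lx, 0.0, pz - t * lz)
```

Applying this to the corners of each shadow strip gives the ground-plane quadrilateral on which that strip's translucent silhouette column is drawn.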