The Convergence of Graphics and Imaging

Over twenty years ago a SIGGRAPH panel session addressed the convergence of computer graphics and image processing. At that time the emphasis was on low-level operations, such as filtering to avoid aliasing, and on related psychophysics issues. More recently, graphics and imaging have been converging at a higher level as we move toward blending the synthetic world of computer-generated images with the real world of computer-captured images. In this talk we describe several research directions that relate to this convergence, illustrated with specific examples of work at MERL – A Mitsubishi Electric Research Laboratory. These research directions are:

- Face analysis: analyzing images of the human face to determine identity and orientation, and ultimately to reconstruct the shape of the face.

- Reconstruction of static and dynamic 3D geometry from 2D images separated in time or space: the objective is to take multiple images of a real-world scene and recreate the scene's 3D geometry. If objects in the scene are moving, the objective is to extract the dynamic geometry. Once the geometry has been reconstructed, the scene can be edited and relit.

- Display of 3D scalar fields, also known as volume graphics: this concerns 3D rather than 2D images, such as CT and MRI scans. These scans can be thought of as 3D images in that they are point samples of a 3D scalar field, just as a computer-captured image is a set of point samples of a 2D scalar field. The objective of volume graphics is to create and display the 3D geometries that underlie 3D images. An inexpensive yet real-time (30 fps for a 256 × 256 × 256 volume) implementation of Pfister and Kaufman's Cube-4 rendering architecture will be described.
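The reconstruction direction above — recovering 3D geometry from multiple 2D images — can be sketched in its simplest form as two-view triangulation. The code below is an illustrative sketch, not the talk's actual method: it uses the standard linear (DLT) triangulation of one point from two calibrated views, with hypothetical 3×4 projection matrices `P1` and `P2` chosen for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its projections in two images.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the
    (u, v) image coordinates of the same scene point in each view.
    Uses the standard linear DLT method: each view contributes two
    homogeneous equations, and the 3D point is the null vector of
    the stacked system (smallest singular vector of A).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # homogeneous solution
    return X[:3] / X[3]      # dehomogenize to (x, y, z)
```

With more than two images of a static scene, the same system simply gains two rows per view; for moving objects, triangulating per frame yields the dynamic geometry referred to above.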
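The view of a CT or MRI scan as point samples of a 3D scalar field implies a reconstruction step whenever the field is needed between grid points — for example, along the rays cast by a volume renderer such as Cube-4. A minimal sketch of that step, assuming the volume is stored as samples on an integer grid and the query point lies strictly inside it, is trilinear interpolation:

```python
import numpy as np

def trilinear_sample(volume, x, y, z):
    """Reconstruct a 3D scalar field at a continuous point (x, y, z)
    from its point samples on an integer grid, by trilinear
    interpolation.  Assumes (x, y, z) lies inside the grid so that
    the eight surrounding voxels exist (no bounds clamping here)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = x - x0, y - y0, z - z0
    # Blend the eight surrounding voxel samples, weighted by how
    # close the query point is to each along every axis.
    value = 0.0
    for i, wx in ((x0, 1 - fx), (x1, fx)):
        for j, wy in ((y0, 1 - fy), (y1, fy)):
            for k, wz in ((z0, 1 - fz), (z1, fz)):
                value += wx * wy * wz * volume[i, j, k]
    return value
```

A ray caster evaluates this at regular intervals along each viewing ray and composites the results; hardware architectures like Cube-4 achieve real-time rates by organizing memory so that the eight neighboring samples are fetched in parallel.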