Smarter Presentations: Exploiting Homography in Camera-Projector Systems

Standard presentation systems consisting of a laptop connected to a projector suffer from two problems: (1) the projected image appears distorted (keystoned) unless the projector is precisely aligned to the projection screen; (2) the speaker is forced to interact with the computer rather than the audience. This paper shows how the addition of an uncalibrated camera, aimed at the screen, solves both problems. Although the locations, orientations and optical parameters of the camera and projector are unknown, the projector-camera system calibrates itself by exploiting the homography between the projected slide and the camera image. Significant improvements are possible over passively calibrating systems since the projector actively manipulates the environment by placing feature points into the scene. For instance, using a low-resolution (160×120) camera, we can achieve an accuracy of ±3 pixels in a 1024×768 presentation slide. The camera-projector system infers models for the projector-to-camera and projector-to-screen mappings in order to provide two major benefits. First, images sent to the projector are pre-warped in such a way that the distortions induced by the arbitrary projector-screen geometry are precisely negated. This enables projectors to be mounted anywhere in the environment, for instance at the side of the room, where the speaker is less likely to cast shadows on the screen and where the projector does not occlude the audience’s view. Second, the system detects the position of the user’s laser pointer dot in the camera image at 20 Hz, allowing the laser pointer to emulate the pointing actions of a mouse. This enables the user to activate virtual buttons in the presentation (such as “next slide”) and draw on the projected image. The camera-assisted presentation system requires no special hardware (aside from the cheap camera) and runs on a standard laptop as a Java application. It is now used by the authors for all of their conference presentations.

Author note: Rahul Sukthankar (rahuls@cs.cmu.edu) is now affiliated with Compaq Cambridge Research Lab and Carnegie Mellon; Robert Stockton and Matthew Mullin ({rstock,mmullin}@whizbang.com) are now with WhizBang! Labs.
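
The pipeline the abstract describes can be sketched as follows: project known feature points, detect them with the uncalibrated camera, fit the projector-to-camera and camera-to-screen homographies, and pre-warp each slide with the inverse of their composition so the keystone distortion cancels out. Below is a minimal illustrative sketch of that idea in Python with OpenCV, not the authors' Java implementation; the point coordinates, screen-corner detections, and file names are assumed placeholder values, not data from the paper.

```python
import cv2
import numpy as np

# Hypothetical correspondences: calibration points in projector (slide)
# coordinates and where a low-resolution (160x120) camera observed them.
# In the described system these come from feature points the projector
# actively places into the scene.
slide_pts = np.array([[0, 0], [1023, 0], [1023, 767], [0, 767]], dtype=np.float32)
cam_pts   = np.array([[21, 18], [141, 25], [136, 103], [17, 96]], dtype=np.float32)  # assumed detections

# Projector-to-camera homography (3x3) fitted from the correspondences.
H_proj_to_cam, _ = cv2.findHomography(slide_pts, cam_pts)

# Corners of the physical screen as seen by the camera (assumed values),
# mapped to an undistorted 1024x768 screen coordinate frame.
screen_in_cam = np.array([[25, 20], [138, 22], [134, 100], [22, 98]], dtype=np.float32)
screen_rect   = np.array([[0, 0], [1023, 0], [1023, 767], [0, 767]], dtype=np.float32)
H_cam_to_screen, _ = cv2.findHomography(screen_in_cam, screen_rect)

# Composite projector-to-screen mapping; its inverse is the pre-warp.
H_proj_to_screen = H_cam_to_screen @ H_proj_to_cam
H_prewarp = np.linalg.inv(H_proj_to_screen)

# Pre-warp a slide so that, after the projector's own keystone distortion,
# it appears rectangular on the screen.
slide = cv2.imread("slide.png")  # assumed 1024x768 slide image
prewarped = cv2.warpPerspective(slide, H_prewarp, (1024, 768))
cv2.imwrite("prewarped_slide.png", prewarped)
```

Displaying `prewarped_slide.png` through the projector would then produce an undistorted rectangle on the screen, since the pre-warp and the projector-screen geometry compose to the identity mapping in screen coordinates.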
