VizWear-Active: Face Detection, Tracking and Registration for Augmented Memory

In this paper, we discuss a wearable assistant with a wearable active camera. This system, called VizWear-Active, can actively and robustly understand the wearer and his or her environment by controlling the camera based on image processing and motion-sensor input. We propose a face-tracking method for VizWear-Active based on the ConDensation algorithm, which achieves real-time tracking through client-server distributed sampling. The tracker is further stabilized against camera motion by using the motion sensors. We confirmed the effectiveness of the face tracking in experiments on a prototype VizWear-Active system.
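As an illustration, the ConDensation framework the tracker builds on can be sketched as a simple particle filter over a 2-D face position. The Gaussian motion and likelihood models below are placeholder assumptions for the sketch, not the paper's actual face-observation model, and the `camera_motion` offset only hints at the motion-sensor compensation described in the abstract.

```python
import random

def condensation_step(particles, weights, measure,
                      camera_motion=(0.0, 0.0), motion_noise=2.0):
    """One ConDensation (particle filter) iteration:
    resample, predict with diffusion, and reweight by a measurement model.
    `camera_motion` is a sensed ego-motion offset subtracted during
    prediction (a simplification of the paper's motion-sensor idea)."""
    n = len(particles)
    # Resample particles in proportion to their weights.
    resampled = random.choices(particles, weights=weights, k=n)
    # Predict: compensate for camera motion, then diffuse with Gaussian noise.
    predicted = [(x - camera_motion[0] + random.gauss(0, motion_noise),
                  y - camera_motion[1] + random.gauss(0, motion_noise))
                 for x, y in resampled]
    # Measure: weight each particle by the observation likelihood.
    new_weights = [measure(p) for p in predicted]
    total = sum(new_weights) or 1.0
    new_weights = [w / total for w in new_weights]
    return predicted, new_weights

def estimate(particles, weights):
    """Weighted-mean state estimate over the particle set."""
    x = sum(w * p[0] for p, w in zip(particles, weights))
    y = sum(w * p[1] for p, w in zip(particles, weights))
    return x, y

# Toy usage: track a stationary "face" at (50, 40) with a Gaussian likelihood.
random.seed(0)
target = (50.0, 40.0)

def likelihood(p):
    d2 = (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2
    return 2.718281828 ** (-d2 / 200.0)

particles = [(random.uniform(0, 100), random.uniform(0, 100))
             for _ in range(500)]
weights = [1.0 / 500] * 500
for _ in range(20):
    particles, weights = condensation_step(particles, weights, likelihood)
print(estimate(particles, weights))  # converges near (50, 40)
```

In the actual system, the measurement step would score candidate face regions (e.g. by template or appearance matching), and the client-server split distributes the sampling workload to reach real-time rates.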
