An Affect-Responsive Interactive Photo Frame

We develop an interactive photo-frame system in which a series of videos of a single person is automatically segmented and a response logic is derived to interact with the user in real time. The system comprises five modules. The first module analyzes the uploaded videos offline and prepares segments for interactive play. The second module fuses multi-modal input (activity levels, facial expression, etc.) into a modeled user state. The third module, the internal frame logic, uses these states to select segments from the offline-generated segment dictionary, thereby determining the system's response: a continuous video stream is synthesized from the prepared segments in accordance with the modeled state of the user. The frame logic also supports online adaptation, based on input-output pairs stored during real-time operation, and offline learning to improve the system response. The fourth module is the application interface, which handles the input and output streams. Finally, a dual-frame module is described that enhances the use of the system.
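The second and third modules described above (state modeling and segment selection) could be caricatured as a small control loop. The sketch below is illustrative only: the state labels, segment names, thresholds, and matching rule are assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical offline-generated segment dictionary (module 1 output):
# user-state label -> prepared video segment identifiers.
SEGMENTS = {
    "idle":    ["idle_01", "idle_02"],
    "smiling": ["smile_01", "smile_02"],
    "active":  ["wave_01", "nod_01"],
}

def model_user_state(activity_level, expression):
    """Module 2 (sketch): fuse multi-modal input into a discrete user state.

    `activity_level` in [0, 1] and `expression` are assumed inputs; the
    real system derives these from video analysis.
    """
    if expression == "smile":
        return "smiling"
    if activity_level > 0.5:
        return "active"
    return "idle"

def select_segment(state, rng=random):
    """Module 3 (sketch): frame logic picks a segment matching the state,
    falling back to idle segments for unknown states."""
    candidates = SEGMENTS.get(state, SEGMENTS["idle"])
    return rng.choice(candidates)

# One step of the interaction loop: sense -> model -> respond.
state = model_user_state(activity_level=0.2, expression="smile")
segment = select_segment(state)
```

In the actual system the selected segments are concatenated into a continuous video stream, and the stored (state, segment) pairs feed the online/offline adaptation described above.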
