Human-humanoid interaction by an intentional system

This paper describes a framework for developing an intentional vision system oriented to human-humanoid interaction. The system is able to recognize user faces and to recognize and track human postures by visual perception. The framework is organized into two modules mapped onto the corresponding outputs: intentional perception of faces, and intentional perception of human body movements. Moreover, a possible integration of the intentional vision module into a complete cognitive architecture is proposed, in which knowledge management and reasoning are supported by a suitable OWL-DL ontology. In particular, the ontological knowledge approach is employed for comprehending human behaviour and expressions, while stored user habits are used to build a semantically meaningful structure for perceiving human wills. A semantic description of user wills is formulated in terms of the symbolic features produced by the intentional vision system. Sequences of symbolic features belonging to a domain-specific ontology are employed to infer human wills and to perform suitable actions.
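The mapping from sequences of symbolic perceptual features to inferred user wills can be sketched as a simple rule lookup. This is a minimal illustrative sketch only: the feature names (`face:known_user`, `posture:arm_raised`, etc.) and the rule table are hypothetical assumptions, not the paper's actual ontology or inference mechanism.

```python
# Hypothetical sketch: inferring a user's "will" from a sequence of
# symbolic features emitted by an intentional vision system.
# All feature labels and rules below are illustrative assumptions.

WILL_RULES = {
    ("face:known_user", "posture:arm_raised"): "greet",
    ("face:known_user", "posture:pointing"): "fetch_object",
    ("face:unknown", "posture:approaching"): "request_identification",
}

def infer_will(symbol_sequence):
    """Match consecutive pairs of symbolic features against the rule table."""
    for i in range(len(symbol_sequence) - 1):
        key = (symbol_sequence[i], symbol_sequence[i + 1])
        if key in WILL_RULES:
            return WILL_RULES[key]
    return "no_action"  # no matching rule: the robot performs no action

# Example: a recognized user raising an arm is interpreted as a greeting.
print(infer_will(["face:known_user", "posture:arm_raised"]))  # -> greet
```

In the paper's architecture this lookup would instead be performed by reasoning over the OWL-DL ontology, so that new behaviours can be inferred from class definitions rather than enumerated explicitly.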
