In this paper, we consider the problem of enabling restaurant-serving robots to learn about their customers and to plan and act more interactively. Solving this problem requires understanding the situation the customer is in. To infer the customer's situation automatically, we propose sensing the customer's behavioral signals and using these data to predict the customer's situation. Specifically, we propose a machine learning algorithm that models the customer's behavioral patterns during a meal. First, we collect behavioral data from the customer during dinner using two kinds of wearable devices: an eye tracker and a watch-type EDA device. We then present a novel algorithm that analyzes these data efficiently and extracts individual behavioral patterns. The proposed model has a hierarchical structure: the bottom layer combines the multi-modal behavioral data according to the causal structure of the data and extracts a feature vector, and the upper layer uses the extracted feature vectors to predict the customer's situation based on the temporal correlation between them. Experimental results show that the proposed model analyzes the behavioral data efficiently and predicts the customer's current situation.
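To make the two-layer structure concrete, the following is a minimal sketch, not the authors' actual algorithm: it assumes hypothetical gaze and EDA feature arrays, fuses them with simple concatenation plus PCA in place of the paper's causal fusion, and models temporal correlation with a sliding window of past feature vectors fed to a generic classifier.

```python
# Minimal two-layer sketch (assumed stand-in, not the paper's model):
# bottom layer fuses multi-modal signals into a feature vector,
# upper layer predicts the situation from a short temporal window.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two modalities (per time step):
# gaze features from the eye tracker, arousal features from the EDA watch.
T = 300
gaze = rng.normal(size=(T, 8))       # e.g., fixation/saccade statistics
eda = rng.normal(size=(T, 4))        # e.g., skin-conductance features
situation = rng.integers(0, 3, T)    # e.g., 3 hypothetical dining situations

# Bottom layer: combine the multi-modal data and extract a compact
# feature vector per time step (here: concatenation + PCA).
fused = np.hstack([gaze, eda])
features = PCA(n_components=5).fit_transform(fused)

# Upper layer: exploit temporal correlation by stacking a window of the
# last W feature vectors as the input to a situation classifier.
W = 5
X = np.stack([features[t - W:t].ravel() for t in range(W, T)])
y = situation[W:]

clf = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```

In the paper's setting, the PCA-plus-window pipeline above would be replaced by the proposed causal fusion of the bottom layer and the temporal model of the upper layer; the sketch only illustrates the hierarchical data flow from raw multi-modal signals to situation prediction.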