A layered interpretation of human interactions captured by ubiquitous sensors

We are developing technology for an interaction corpus: a large collection of human interaction data captured by various sensors and annotated with machine-readable indices, intended to record episodes from nearly every part of daily life. To build such a corpus, we have prototyped ubiquitous/wearable sensor systems that collaboratively capture human interactions from multiple points of view. The purpose of this study is to develop a systematic framework in which applications can handle human contexts, represented as machine-readable indices, in a uniform manner by explicitly separating the raw data acquired from sensors from application-level semantics. This separation bridges the gaps between the context levels required by different applications and allows human interactions to be captured in a wide range of situations. This paper proposes a layered model for interpreting human interactions based on a bottom-up approach. In this model, interpretations of human interactions are hierarchically abstracted so that each layer carries distinct semantic/syntactic information represented by machine-readable indices. We illustrate the use of our architecture through three sample applications, each of which gives visitors at a poster exhibition site rich opportunities to share their experiences with others. Moreover, we demonstrate the potential applicability and versatility of our approach by extending the system to another domain: a meeting situation.
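
To make the layered, bottom-up abstraction concrete, the sketch below shows one possible way to represent machine-readable indices and to merge lower-layer indices into a higher-layer one. The layer names, the `InteractionIndex` structure, and the time-gap merging rule are illustrative assumptions for this sketch, not the paper's actual data model or interpretation rules.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical layer names for illustration; the paper's layers may differ.
LAYERS = ["raw_signal", "primitive_event", "interaction", "episode"]

@dataclass
class InteractionIndex:
    """A machine-readable index attached to one layer of interpretation."""
    layer: str                      # abstraction layer this index belongs to
    label: str                      # e.g. "gaze_at_poster", "conversation"
    start: float                    # start time (seconds)
    end: float                      # end time (seconds)
    sources: List["InteractionIndex"] = field(default_factory=list)  # lower-layer evidence

def abstract_bottom_up(lower: List[InteractionIndex],
                       label: str,
                       layer: str,
                       max_gap: float = 2.0) -> Optional[InteractionIndex]:
    """Merge temporally adjacent lower-layer indices into one higher-layer index.

    A toy aggregation rule (merge when gaps are short); a real system would
    apply domain-specific interpretation at each layer.
    """
    if not lower:
        return None
    lower = sorted(lower, key=lambda ix: ix.start)
    for prev, nxt in zip(lower, lower[1:]):
        if nxt.start - prev.end > max_gap:
            return None  # too fragmented to abstract into a single event
    return InteractionIndex(layer=layer, label=label,
                            start=lower[0].start, end=lower[-1].end,
                            sources=lower)

# Example: two short gaze events observed by a wearable sensor are abstracted
# into a single interaction-level index, which keeps links to its evidence.
gazes = [
    InteractionIndex("primitive_event", "gaze_at_poster_A", 10.0, 12.5),
    InteractionIndex("primitive_event", "gaze_at_poster_A", 13.0, 18.0),
]
interaction = abstract_bottom_up(gazes, "looking_at_poster_A", "interaction")
print(interaction.label, interaction.start, interaction.end)  # looking_at_poster_A 10.0 18.0
```

Because each higher-layer index retains references to the lower-layer indices it was derived from, applications can query the corpus at whichever level of abstraction they need while the raw sensor data stays separate from application semantics.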