Meetings and meeting modeling in smart environments

In this paper we survey our research on smart meeting rooms and its relevance to augmented-reality meeting support and to the real-time or off-line generation of meetings in virtual reality. The research reported here forms part of the European 5th and 6th Framework Programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that collects multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools for real-time support, browsing, retrieval and summarization of meetings. Our aim is to investigate (semantic) representations of what takes place during meetings, so that meeting activities (discussions, presentations, voting, etc.) can be generated, for example in virtual reality. Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those unable to be physically present to take part in a virtual way. This may lead to situations in which the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
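To make the idea of a semantic representation of meeting activities concrete, the following is a minimal sketch of how time-stamped, actor-attributed meeting events might be recorded and queried. The record fields and the `events_between` helper are illustrative assumptions, not the actual M4/AMI annotation schema; a real system would derive such events from the multimodal captures.

```python
from dataclasses import dataclass

# Hypothetical semantic meeting-event record (not the M4/AMI schema):
# each event says who did what, when, and through which capture channel,
# so a browser, summarizer, or virtual-reality replay can be driven from it.
@dataclass
class MeetingEvent:
    start: float              # seconds from the start of the meeting
    end: float
    actor: str                # participant identifier
    action: str               # e.g. "presentation", "discussion", "voting"
    modality: str = "speech"  # capture channel: speech, gesture, slides, ...

def events_between(events, t0, t1):
    """Return the events overlapping the interval [t0, t1), in time order."""
    return sorted(
        (e for e in events if e.start < t1 and e.end > t0),
        key=lambda e: e.start,
    )

# A toy annotated meeting log.
log = [
    MeetingEvent(0.0, 120.0, "p1", "presentation"),
    MeetingEvent(120.0, 300.0, "p2", "discussion"),
    MeetingEvent(300.0, 320.0, "p3", "voting"),
]

# A meeting browser could ask: what happened between minutes 2 and 5?
print([e.action for e in events_between(log, 120.0, 300.0)])
```

Off-line retrieval and real-time support could share such a representation: the former queries a stored log, while the latter consumes the same events as they are recognized.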
