Using Kinect to Develop a Smart Meeting Room

The most advanced conference systems on the market pursue higher meeting quality through better equipment, whether personalized laptops, touch pads, dedicated microphones, or high-quality video recording; however, such systems are known to be expensive to build. This paper addresses that predicament and contributes a low-cost, high-quality smart meeting room. We combine Microsoft Kinect and Bluetooth techniques to build a smart conference system: each participant is identified through his or her Bluetooth-enabled device, and the resulting ID is used to query a central database for that participant's contact information (phone number, email, web storage space, and text or meeting records). The system then uses Kinect as a gesture-recognition device, detecting each person's skeleton to drive multiple functions: controlling the computer, sending information, auto-uploading files, and recording personal meeting records in the database. In this way, a smart meeting room can be established with simplified equipment. Participants need not bring any additional equipment beyond a Bluetooth-enabled cell phone, and aside from the central computer, the room only requires a Kinect to perform gesture recognition and detection and to record the meeting (text and video). Providing these benefits to users with such simplified equipment is the main purpose of this paper.
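The identification step described above, mapping a detected Bluetooth device to a participant's record in the central database, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the table schema, field names, and the use of a Bluetooth device address as the participant ID are all assumptions made for the example.

```python
import sqlite3


def build_demo_db():
    """Create an in-memory participant database (hypothetical schema)."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE participants ("
        "  bt_address  TEXT PRIMARY KEY,"   # Bluetooth address used as the ID
        "  name        TEXT,"
        "  phone       TEXT,"
        "  email       TEXT,"
        "  storage_url TEXT)"               # personal web storage space
    )
    conn.execute(
        "INSERT INTO participants VALUES (?, ?, ?, ?, ?)",
        ("00:1A:7D:DA:71:13", "Alice", "555-0100",
         "alice@example.com", "https://storage.example.com/alice"),
    )
    conn.commit()
    return conn


def lookup_participant(conn, bt_address):
    """Map a detected Bluetooth address to the participant's contact record.

    Returns None for unknown devices, which the system could treat as guests.
    """
    row = conn.execute(
        "SELECT name, phone, email, storage_url FROM participants "
        "WHERE bt_address = ?",
        (bt_address,),
    ).fetchone()
    if row is None:
        return None
    name, phone, email, storage_url = row
    return {"name": name, "phone": phone,
            "email": email, "storage_url": storage_url}
```

Once a record is retrieved, a gesture recognized by Kinect could be dispatched against it, for example auto-uploading a file to the participant's `storage_url` or appending minutes to the participant's meeting records.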
