Research on User Applying Mode for Video Conference System

Based on an analysis of the H.323 system, a new method called user applying mode is proposed. In this mode, the MCU sends user information to all conference endpoints, and any conferee can apply for more than one video stream. The applied video streams are multiplexed and transmitted by the MCU and displayed independently on the endpoint. As a result, interactivity is improved and the original size of these video streams is preserved. Test results show that the new mode offers good compatibility and is superior to the mixed mode in CPU and memory utilization.
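The sketch below illustrates one possible reading of the user applying flow summarized above: the MCU distributes the conferee list to every endpoint, an endpoint applies for one or more video streams, and the MCU forwards (multiplexes) each applied stream unchanged rather than mixing it. This is a minimal Python model under stated assumptions; all class, method, and message names (MCU, Endpoint, the roster push, the apply request) are illustrative and not defined in the paper.

```python
# Minimal sketch of the user applying flow; all names here are assumptions,
# not the paper's actual protocol or API.
from dataclasses import dataclass, field


@dataclass
class Endpoint:
    """A conference endpoint that may apply for several video streams."""
    user_id: str
    applied: set[str] = field(default_factory=set)            # sources requested
    windows: dict[str, bytes] = field(default_factory=dict)   # one window per stream

    def on_roster(self, roster: list[str]) -> None:
        # The MCU pushes the current user list for the conferee to choose from.
        print(f"{self.user_id} sees conferees: {roster}")

    def apply_for(self, mcu: "MCU", *user_ids: str) -> None:
        # A conferee can apply for more than one video stream.
        self.applied.update(user_ids)
        mcu.handle_apply(self.user_id, self.applied)

    def on_video(self, source_id: str, frame: bytes) -> None:
        # Each applied stream is displayed independently, at its original size.
        self.windows[source_id] = frame


class MCU:
    """Multipoint Control Unit: distributes the roster and forwards applied streams."""

    def __init__(self) -> None:
        self.endpoints: dict[str, Endpoint] = {}
        self.subscriptions: dict[str, set[str]] = {}  # viewer -> applied sources

    def join(self, ep: Endpoint) -> None:
        self.endpoints[ep.user_id] = ep
        self.broadcast_roster()

    def broadcast_roster(self) -> None:
        roster = list(self.endpoints)
        for ep in self.endpoints.values():
            ep.on_roster(roster)

    def handle_apply(self, viewer_id: str, sources: set[str]) -> None:
        self.subscriptions[viewer_id] = set(sources)

    def on_video_frame(self, source_id: str, frame: bytes) -> None:
        # Multiplexing instead of mixing: the frame is forwarded unchanged to
        # every viewer that applied for this source, so no decode/re-encode
        # is needed at the MCU.
        for viewer_id, sources in self.subscriptions.items():
            if source_id in sources:
                self.endpoints[viewer_id].on_video(source_id, frame)


# Example: conferee C applies for A's and B's streams and receives both unmodified.
mcu = MCU()
a, b, c = Endpoint("A"), Endpoint("B"), Endpoint("C")
for ep in (a, b, c):
    mcu.join(ep)
c.apply_for(mcu, "A", "B")
mcu.on_video_frame("A", b"frame-from-A")
mcu.on_video_frame("B", b"frame-from-B")
print(sorted(c.windows))  # ['A', 'B']
```

Because the MCU only forwards the applied streams, this reading is consistent with the abstract's claim of lower CPU and memory usage than the mixed mode, which must decode, compose, and re-encode every participant's video.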
