Semi-Videoconference System Using Real-Time Wireless Technologies

In this paper we present a novel system for wireless videoconferencing. Unlike conventional videoconferencing systems, our approach requires no visual input other than a single neutral-expression image of the user. Our algorithm automatically computes the user's expression features on the conference server from the corresponding voice audio. These features are transmitted to the end users' mobile handsets, where the final expression synthesis is performed. Because the bulky visual data is replaced by a small amount of feature data, a large share of the transmission bandwidth is saved, improving communication quality over wireless links.
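To make the bandwidth argument concrete, the sketch below illustrates the kind of per-frame feature packet a server might send in place of video, and compares its bitrate with a low-rate video stream. The feature count, frame rate, and bitrate figures are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of the feature-vs-video bandwidth trade-off described above.
# All sizes, rates, and names here are assumptions for illustration only.
import struct

FEATURES_PER_FRAME = 20   # assumed number of expression feature coefficients per frame
BYTES_PER_FEATURE = 4     # one 32-bit float per coefficient
FRAME_RATE = 15           # assumed synthesis frame rate (frames per second)


def pack_expression_features(features):
    """Serialize one frame of expression features for transmission to a mobile client."""
    assert len(features) == FEATURES_PER_FRAME
    return struct.pack(f"<{FEATURES_PER_FRAME}f", *features)


# Illustrative bandwidth comparison: feature stream vs. a compressed mobile video stream.
feature_bitrate = FEATURES_PER_FRAME * BYTES_PER_FEATURE * 8 * FRAME_RATE  # bits/s
video_bitrate = 64_000  # assumed bitrate of a typical low-rate mobile video codec, bits/s

print(f"feature stream:          {feature_bitrate / 1000:.1f} kbit/s")
print(f"compressed video stream: {video_bitrate / 1000:.1f} kbit/s")
```

Even under these rough assumptions, the feature stream is roughly an order of magnitude smaller than a compressed video stream, which is the saving the system exploits on bandwidth-limited wireless channels.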