Real-time streaming for collaborative interactions

Fueled by recent advances in network technologies, computer-assisted collaboration based on multimedia streaming techniques has grown phenomenally in recent years. Compared to traditional collaborative activities that communicate mainly through images, video, audio, or text, new media types such as 3D models and haptic (force) feedback can represent collaboration in a more realistic and comprehensive manner. My dissertation focuses on these new media types and explores three important aspects of multimedia streaming. The first study investigates how to transmit static 3D models over lossy networks without adding extra transmission cost. We develop a receiver-based loss-tolerance scheme that recovers lost data when streaming 3D progressive meshes over lossy networks, and propose a linear prediction method that reconstructs missing structural and geometric data on the client (receiver) side. Our approach requires no retransmission and introduces no additional protection bits. The second study examines collaborative haptic interaction with physically based 3D deformable models over lossy networks. Two questions are addressed: first, how to share visual and force feedback with remote partners in real time while interacting with a physically based soft body; second, what effects packet loss and delay have during this sharing, and how to mitigate the problems they cause. We propose new loss-compensation and prediction algorithms that correct the errors caused by lossy network conditions and provide natural, realistic collaborative user experiences. The third study analyzes the case in which 3D surface deformations are streamed in real time. We propose an interactive collaboration framework that adopts a spectrum-based representation of shape deformation, which also allows users with different display capabilities and network conditions to observe the same deformations simultaneously. In addition, two applications based on these studies are described: interacting with virtual deformable models on mobile devices and in an immersive virtual environment.
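
To make the receiver-side recovery idea from the first study concrete, the sketch below shows one possible form of linear prediction for a lost progressive-mesh refinement: when the packet carrying a vertex-split record is dropped, the new vertex position is predicted as a linear combination of already-received geometry. The function names, data layout, and predictor weights are illustrative assumptions, not the dissertation's actual scheme.

```python
import numpy as np

def predict_lost_vertex(parent_pos, neighbor_positions):
    """Predict the position of a vertex whose split record was lost.

    Hypothetical linear predictor: place the new vertex at a linear
    combination of its parent and the centroid of the parent's
    already-received neighbors, keeping the refined surface smooth.
    """
    centroid = np.mean(neighbor_positions, axis=0)
    # Equal weights are illustrative; a real scheme could fit them offline.
    return 0.5 * parent_pos + 0.5 * centroid

def refine_mesh(vertices, splits):
    """Apply a stream of vertex-split refinements, concealing lost records.

    `splits` holds (parent_index, neighbor_indices, offset) tuples, where
    `offset` is None when the packet carrying it never arrived.
    """
    for parent, neighbors, offset in splits:
        parent_pos = vertices[parent]
        if offset is not None:
            new_pos = parent_pos + offset                 # data arrived intact
        else:
            new_pos = predict_lost_vertex(parent_pos,     # conceal the loss
                                          vertices[neighbors])
        vertices = np.vstack([vertices, new_pos])
    return vertices
```

Because the predictor uses only data already held by the receiver, no retransmission request or extra protection bits are needed, which is the property the first study emphasizes.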
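The second study describes its loss-compensation and prediction algorithms only at a high level. One standard ingredient for this kind of shared haptic interaction, shown here purely as a hedged illustration rather than the dissertation's method, is linear (dead-reckoning-style) extrapolation of the remote partner's probe state while updates are lost or delayed; all class and parameter names are hypothetical.

```python
import numpy as np

class RemoteStatePredictor:
    """Extrapolate a remote partner's haptic probe position between updates.

    When the network drops or delays a state packet, the local simulation
    keeps advancing the last known position along the last known velocity,
    so the shared deformable body and the force feedback stay continuous.
    """

    def __init__(self):
        self.position = None
        self.velocity = np.zeros(3)
        self.timestamp = None

    def on_packet(self, position, timestamp):
        """Incorporate a received state update (position vector, time in seconds)."""
        position = np.asarray(position, dtype=float)
        if self.position is not None and timestamp > self.timestamp:
            self.velocity = (position - self.position) / (timestamp - self.timestamp)
        self.position, self.timestamp = position, timestamp

    def estimate(self, now):
        """Best linear estimate of the remote probe position at time `now`."""
        if self.position is None:
            return None
        return self.position + self.velocity * (now - self.timestamp)
```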
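For the third study, the sketch below illustrates the general idea behind a spectrum-based deformation representation under simple assumptions: per-vertex displacements are projected onto the low-frequency eigenvectors of a combinatorial mesh Laplacian, so clients with weaker displays or slower links can reconstruct the same deformation from fewer coefficients. The basis choice, truncation rule, and toy chain mesh are illustrative, not the dissertation's actual framework.

```python
import numpy as np

def laplacian(num_vertices, edges):
    """Combinatorial graph Laplacian of a mesh's edge graph."""
    L = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def encode_deformation(displacements, basis, k):
    """Project per-vertex displacements (n x 3) onto the first k basis vectors."""
    return basis[:, :k].T @ displacements        # k x 3 spectral coefficients

def decode_deformation(coeffs, basis):
    """Reconstruct displacements from however many coefficients were received."""
    k = coeffs.shape[0]
    return basis[:, :k] @ coeffs

# Example: a client on a slow link keeps only the lowest-frequency coefficients.
n = 100
edges = [(i, i + 1) for i in range(n - 1)]       # toy chain "mesh"
L = laplacian(n, edges)
_, basis = np.linalg.eigh(L)                     # eigenvectors = spectral basis
displacements = np.random.default_rng(0).normal(size=(n, 3))
coeffs = encode_deformation(displacements, basis, k=10)
approx = decode_deformation(coeffs, basis)       # smooth low-frequency preview
```

Transmitting coefficients in frequency order lets every participant see a consistent, progressively sharpening version of the same deformation, which matches the framework's goal of serving heterogeneous devices and network conditions.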