From remote media immersion to Distributed Immersive Performance

We present the architecture, technology, and experimental applications of a real-time, multi-site, interactive, and collaborative environment called Distributed Immersive Performance (DIP). The objective of DIP is to develop the technology for live, interactive musical performances in which the participants (subsets of the musicians, the conductor, and the audience) are in different physical locations, interconnected by very high-fidelity multichannel audio and video links. DIP is a specific realization of a broader immersive technology: the creation of a complete aural and visual ambience that places a person or a group of people in a virtual space where they can experience events occurring at a remote site or communicate naturally regardless of their location.

The DIP experimental system has interaction sites and servers at different locations on the USC campus and at several partner institutions, including the New World Symphony in Miami Beach, FL. The sites are equipped differently so that the effects of video and audio fidelity on ease of use and functionality can be tested for different applications. Many sites project high-definition (HD) or digital video (DV) quality images onto wide-screen wall displays, fully integrated with an immersive audio reproduction system that provides a seamless, three-dimensional aural environment with correct spatial sound localization for the participants. The system can store and play back the many streams of synchronized audio and video data (immersidata), and it uses novel protocols for the low-latency, seamless, synchronized real-time delivery of immersidata over local-area networks and wide-area networks such as Internet2.

We discuss several recent interactive experiments using the system and a number of technical challenges common to the DIP scenario and to a broader range of applications:

(1) low-latency continuous media (CM) stream transmission, synchronization, and data-loss management;
(2) low-latency, real-time video and multichannel immersive audio acquisition and rendering;
(3) real-time continuous media stream recording, storage, and playback;
(4) human-factors studies: psychophysical, perceptual, artistic, and performance evaluation;
(5) robust integration of all of these technical areas into a seamless presentation to the participants.
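The synchronization problem in challenge (1) can be illustrated with a minimal playout-buffer sketch: incoming packets carry capture timestamps, playback is deferred by a fixed playout delay so that streams from different sites align, and packets that arrive far too late are counted as lost (a point where a concealment scheme would take over). The class name, the fixed-delay policy, and the drop threshold below are illustrative assumptions, not the actual DIP protocol.

```python
import heapq

class PlayoutBuffer:
    """Toy playout buffer for synchronized low-latency stream delivery.

    Packets are ordered by capture timestamp and released only once
    (timestamp + playout_delay) has passed, so multiple streams pushed
    into the same buffer come out in a common presentation order.
    """

    def __init__(self, playout_delay_ms=30):
        self.playout_delay = playout_delay_ms
        self.heap = []        # entries: (timestamp_ms, stream_id, payload)
        self.late_drops = 0   # packets too late to render (candidates for concealment)

    def push(self, timestamp_ms, stream_id, payload):
        """Buffer a packet, keyed by its capture timestamp."""
        heapq.heappush(self.heap, (timestamp_ms, stream_id, payload))

    def pop_ready(self, now_ms):
        """Return packets whose scheduled playout time has arrived.

        A packet scheduled at (ts + playout_delay) that is already more
        than one playout_delay overdue is dropped and counted as lost.
        """
        ready = []
        while self.heap and self.heap[0][0] + self.playout_delay <= now_ms:
            ts, stream_id, payload = heapq.heappop(self.heap)
            if now_ms - (ts + self.playout_delay) > self.playout_delay:
                self.late_drops += 1   # too stale: loss management kicks in
            else:
                ready.append((stream_id, payload))
        return ready
```

For example, with a 30 ms playout delay, packets captured at t = 0 ms and t = 10 ms on two different streams are both held until t = 40 ms and then released together in timestamp order; the fixed delay trades a small constant latency for alignment across sites.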
