Scalable technologies for distributed multimedia systems

The two main operating constraints on today's multimedia servers are storage bandwidth and communication bandwidth. Many efforts have been made to use these resources efficiently when streaming multimedia data to users, with the common objectives of reducing service latency and increasing system throughput. In this dissertation, I present several scalable techniques for cost-effective Video-on-Demand (VoD) systems, treating videos as representative media objects in general multimedia systems. Because video playback is time-synchronous, continuous delivery of a video stream requires reserving a storage-I/O stream and an isochronous network channel to keep playback jitter-free. The server capacity is essentially determined by whichever is smaller: the number of storage-I/O streams or the number of network channels the server can sustain. Dedicating a stream to each viewer, however, quickly exhausts this capacity, and subsequent users experience long waits. Stream sharing among users is therefore essential if the system is to scale beyond the hard limits of the media server.

To address the storage bandwidth limitation, I investigate an efficient buffer management technique that caches I/O streams based on the skew between pairs of successive streams reading the same video: the smaller the skew, the higher the priority for buffering that pair. I then extend this idea to address the network-I/O constraint with two delivery models, one pull-based and one push-based:

(1) On-demand multicast (pull). When a server stream becomes available, the server selects a batch of pending requests for the same video according to a scheduling policy. All of these users tune to the same multicast channel and share a single stream.

(2) Periodic broadcast (push). The server broadcasts a new stream for each video at fixed time intervals, providing a guaranteed service latency. A user can render the current broadcast stream immediately while prefetching other shared streams for later use.

Both models are studied in depth. For each service model, I present a novel approach that improves server network-I/O efficiency by pipelining video streams through clients' disk buffers. As a result, the server can employ a single stream to serve a large population of clients, yielding significant performance improvements over many conventional techniques. In addition to simulation results and mathematical analyses, I demonstrate the superiority of one such technique through a system implementation; experiments with this research prototype confirm that many users can be served instantly without compromising the quality of individual playback.
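To make the skew-based buffering policy concrete, the following sketch (with hypothetical names; it is not the dissertation's implementation) ranks pairs of consecutive streams on the same video by their temporal skew and caches the smallest gaps first, until a given buffer budget is exhausted. A cached gap lets the trailing stream read from the buffer instead of consuming its own storage-I/O stream.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Stream:
    video_id: str
    position: float  # current playback offset, in seconds

def plan_buffering(streams, buffer_budget_seconds):
    """Choose which gaps between consecutive streams to cache, smallest skew first."""
    by_video = defaultdict(list)
    for s in streams:
        by_video[s.video_id].append(s)

    gaps = []
    for video_id, group in by_video.items():
        group.sort(key=lambda s: s.position)
        # Adjacent pairs: the stream with the smaller offset trails the one ahead of it.
        for trailer, leader in zip(group, group[1:]):
            skew = leader.position - trailer.position
            gaps.append((skew, video_id, trailer))

    gaps.sort(key=lambda g: g[0])          # smallest skew = highest caching priority
    remaining, cached = buffer_budget_seconds, []
    for skew, video_id, trailer in gaps:
        if skew <= remaining:              # buffer space needed grows with the skew
            remaining -= skew
            cached.append((video_id, trailer))
    return cached
```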
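The pull-based model can be illustrated by a toy batching scheduler along the following lines; this is a simplified sketch, not the scheduler studied in the dissertation. Pending requests are grouped per video, and whenever a server stream frees up, one video's entire batch is served by a single multicast, selected by a policy such as first-come-first-served or maximum queue length.

```python
import time
from collections import defaultdict

class BatchingMulticastServer:
    """Toy on-demand multicast scheduler (illustrative only)."""

    def __init__(self, policy="mql"):
        self.pending = defaultdict(list)   # video_id -> arrival times of waiting requests
        self.policy = policy

    def request(self, video_id):
        self.pending[video_id].append(time.time())

    def on_stream_available(self):
        """Pick one video and serve its whole batch with a single multicast stream."""
        if not self.pending:
            return None
        if self.policy == "mql":           # maximum queue length: largest batch first
            video_id = max(self.pending, key=lambda v: len(self.pending[v]))
        else:                              # FCFS: the video holding the oldest request
            video_id = min(self.pending, key=lambda v: min(self.pending[v]))
        batch = self.pending.pop(video_id)
        return video_id, len(batch)        # every user in the batch shares this stream
```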
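Finally, the effect of pipelining streams through clients' disk buffers can be approximated with a simple counting model (hypothetical and deliberately simplified; the schemes developed in the dissertation are more elaborate): if each client can buffer up to B seconds of video, a client arriving within B seconds of its predecessor can be fed from that predecessor's buffer rather than from a new server stream.

```python
def server_streams_needed(arrival_times, client_buffer_seconds):
    """Count server streams under a simple buffer-pipelining (chaining) model.

    Clients are chained: a newcomer is fed from the previous client's disk
    buffer whenever the arrival gap fits in that buffer; otherwise the
    server must open a fresh stream.
    """
    streams, prev = 0, None
    for t in sorted(arrival_times):
        if prev is None or t - prev > client_buffer_seconds:
            streams += 1                   # gap too large: start a new server stream
        prev = t
    return streams

# Example: ten clients arriving 30 s apart, each with a 60 s disk buffer,
# can all be served by one server stream.
print(server_streams_needed([30 * i for i in range(10)], 60))   # -> 1
```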