Accurate and low-delay seeking within and across mash-ups of highly-compressed videos

In typical video mash-up systems, a group of source videos is compiled offline into a single composite object. This improves rendering performance but limits the possibilities for dynamic composition of personalized content. This paper discusses the systems and network issues involved in enabling client-side dynamic composition of video mash-ups. In particular, it describes a novel algorithm that supports accurate, low-delay, seamless composition of independent clips. We report on an intelligent application-steered scheme that allows the system layers to prefetch and discard predicted frames before the rendering moment of indexed content. This approach unifies application-level quality-of-experience specification with system-layer quality-of-service processing. To evaluate the scheme, we conduct several experiments and observe substantial performance improvements in both seek accuracy and delay.
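The prefetch-and-discard idea rests on a general property of GOP-coded video: a predicted (P/B) frame can only be decoded starting from a preceding intra-coded (I) frame, so frame-accurate seeking must decode from the nearest earlier I-frame and discard every decoded frame before the seek target instead of rendering it. The following is a minimal sketch of that reference-selection step under those assumptions; all names are hypothetical and this is not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    index: int   # display index within the clip
    kind: str    # "I" for intra-coded, "P"/"B" for predicted


def frames_to_prefetch(frames, target):
    """Return (to_decode, discarded) for an accurate seek to `target`.

    Decoding must begin at the last I-frame at or before the target;
    frames decoded before the target are discarded, not rendered.
    """
    # Locate the last intra-coded frame at or before the seek target.
    start = max(i for i, f in enumerate(frames)
                if f.kind == "I" and f.index <= target)
    # Everything from that I-frame up to the target must be decoded.
    to_decode = [f for f in frames[start:] if f.index <= target]
    # Frames before the target are decoded only as references, then dropped.
    discarded = [f for f in to_decode if f.index < target]
    return to_decode, discarded
```

For example, with the frame sequence I P P I P P and a seek to display index 4, decoding starts at the I-frame at index 3, and that frame is discarded once index 4 is rendered; prefetching this reference run before the rendering moment is what keeps the seek both accurate and low-delay.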