Video redundancy detection in rushes collection

Rushes are collections of raw, unedited video footage. They contain various redundancies, such as rainbow screens, clapperboard shots, white/black frames, and unnecessary re-takes. This paper develops a set of solutions for removing these redundancies, together with an effective system for video summarisation. We treat manual production effects, e.g. clapperboard shots, as delimiters in the visual language. A rushes video is therefore divided into a group of subsequences, each of which corresponds to a re-take instance. A graph matching algorithm is proposed to estimate the similarity between re-takes and to suggest the best instance for content presentation. Experiments on the Rushes 2008 collection show that redundancy detection can shorten a video to 4%-16% of its original length. This greatly reduces the complexity of content selection and leads to an effective and efficient video summarisation system.
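
As a rough illustration of the re-take matching idea (a minimal sketch, not the graph matching algorithm proposed in the paper), the snippet below scores the similarity between two re-take subsequences by pairing their keyframes with a maximum-weight bipartite assignment. The feature representation, function names, and normalisation choice are all assumptions made for the example.

```python
"""Hypothetical sketch: similarity between two re-takes via keyframe matching."""
import numpy as np
from scipy.optimize import linear_sum_assignment


def retake_similarity(frames_a: np.ndarray, frames_b: np.ndarray) -> float:
    """Similarity in [0, 1] between two re-takes given as (n, d) keyframe feature arrays."""
    # Cosine similarity between every keyframe pair (rows of A vs rows of B).
    a = frames_a / np.linalg.norm(frames_a, axis=1, keepdims=True)
    b = frames_b / np.linalg.norm(frames_b, axis=1, keepdims=True)
    sim = a @ b.T  # (n_a, n_b) similarity matrix

    # Maximum-weight bipartite matching: pair keyframes so the summed similarity
    # is maximised (linear_sum_assignment minimises cost, hence the negation).
    rows, cols = linear_sum_assignment(-sim)
    matched = sim[rows, cols]

    # Normalise by the longer re-take so unmatched keyframes count against the score.
    return float(matched.sum() / max(len(frames_a), len(frames_b)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    take_1 = rng.random((8, 64))                       # 8 keyframes, 64-d features
    take_2 = take_1[:6] + 0.01 * rng.random((6, 64))   # shorter, near-identical re-take
    take_3 = rng.random((5, 64))                       # unrelated shot
    print(retake_similarity(take_1, take_2))           # high: likely the same take
    print(retake_similarity(take_1, take_3))           # lower: different content
```

In such a scheme, re-takes whose pairwise score exceeds a threshold would be grouped, and only the best instance of each group kept for the summary; the thresholding and selection criteria here are illustrative only.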