Summarizing wearable video

"We want to record our entire life on video" is the motivation behind this research. Advances in wearable devices and storage capacity will make it possible to keep a lifetime of video: we could capture 70 years of our life. The problem, however, is how to handle such a huge amount of data, so automatic summarization based on personal interest is required. In this paper we propose an approach to the automatic structuring and summarization of wearable video. (Wearable video is our abbreviation for "video captured by a wearable camera.") Our approach uses a wearable camera together with a brain-wave sensor: the video is first structured by objective video features, and the resulting shots are then rated by subjective measures derived from brain waves. In real-world experiments the approach proved very successful, automatically extracting all the events that the subjects reported having found interesting.
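The two-stage pipeline (objective structuring into shots, then subjective rating by a brain-wave measure) could be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the frame-difference cut detector, the mean-amplitude interest score, the thresholds, and all toy data are assumptions introduced for illustration.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# (1) structure the video into shots using an objective feature
#     (here, a simple frame-to-frame difference threshold), then
# (2) rate each shot with a subjective measure derived from a
#     brain-wave signal (here, the mean signal amplitude over the shot).
# All data, thresholds, and signal shapes are illustrative assumptions.

def detect_shots(frame_diffs, cut_threshold=0.5):
    """Split frame indices into (start, end) shots at large frame differences."""
    shots, start = [], 0
    for i, d in enumerate(frame_diffs, start=1):
        if d > cut_threshold:          # objective cue: treat a big jump as a cut
            shots.append((start, i))
            start = i
    shots.append((start, len(frame_diffs) + 1))
    return shots

def rate_shots(shots, brainwave):
    """Score each shot by the mean brain-wave amplitude within it."""
    return [sum(brainwave[s:e]) / (e - s) for s, e in shots]

def summarize(shots, scores, interest_threshold=0.6):
    """Keep only shots whose subjective score exceeds the threshold."""
    return [shot for shot, score in zip(shots, scores)
            if score > interest_threshold]

# Toy example: 9 frames with one hard cut after frame 4; the brain-wave
# amplitude rises during the second shot, so only that shot is kept.
frame_diffs = [0.1, 0.2, 0.1, 0.9, 0.1, 0.2, 0.1, 0.1]
brainwave   = [0.2, 0.3, 0.2, 0.3, 0.8, 0.9, 0.7, 0.8, 0.9]
shots  = detect_shots(frame_diffs)
scores = rate_shots(shots, brainwave)
print(summarize(shots, scores))    # the "interesting" shot(s)
```

In a real system the cut detector would be replaced by proper shot-boundary detection on the video stream, and the interest score by a measure computed from the recorded brain-wave signal, but the overall structure-then-rate flow stays the same.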
