Lifelog retrieval for memory stimulation of people with memory impairment

Abstract Recent advances in lifelogging, driven mainly by the fast development of wearable cameras, have made it possible to continuously capture moments of our lives from a first-person point of view. Extracting and re-experiencing moments illustrated by autobiographical images is of special interest for stimulating the episodic memory of patients with neurodegenerative diseases (Alzheimer's disease, mild cognitive impairment, etc.). Using a wearable camera, it is possible to generate a huge number of images on a daily basis (around 2,000 images per day in a 30 s time-lapse mode). Since not all captured images are valuable or semantically rich, there is a need for efficient and scalable techniques to separate the wheat from the chaff, that is, to extract egocentric images that are semantically rich and non-redundant enough to be used for memory stimulation. By using state-of-the-art retrieval systems based on convolutional neural network features obtained from these rich, filtered egocentric images, we show how to cope with these requirements and apply the filtered images within a memory stimulation program specially developed to improve the memory of patients with Mild Cognitive Impairment.
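The abstract only summarizes the filtering step; as a rough illustration of how redundancy removal over CNN features might look, the sketch below assumes that feature vectors have already been extracted from the egocentric images (e.g., with a pretrained network) and greedily drops any frame whose cosine similarity to an already kept frame exceeds a threshold. The greedy strategy, the 0.9 threshold, and the function name are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch (not the paper's pipeline): greedy removal of
# near-duplicate lifelog frames using cosine similarity between CNN features.
# Assumes `features` is an (n_images, d) array of CNN descriptors already
# extracted from the egocentric images.
import numpy as np


def filter_redundant(features: np.ndarray, threshold: float = 0.9) -> list[int]:
    """Return indices of frames kept after dropping near-duplicates.

    A frame is kept only if its cosine similarity to every previously
    kept frame stays below `threshold` (0.9 is an arbitrary example value).
    """
    # L2-normalize so that dot products equal cosine similarities.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    normed = features / np.clip(norms, 1e-12, None)

    kept: list[int] = []
    for i, vec in enumerate(normed):
        if all(float(vec @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(2000, 512))   # ~one day of 30 s time-lapse frames
    keep = filter_redundant(feats, threshold=0.9)
    print(f"kept {len(keep)} of {feats.shape[0]} frames")
```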