Ayumu: Efficient lifelogging with focused tasks

Today’s lifelogging devices capture images periodically without considering what data is important to users. Due to their small form factors and limited battery capacities, these lifeloggers are bound to miss important data: either they record at a slow rate to conserve power, or they record at such a high rate that they must frequently recharge. In this paper, we present a new approach to lifelogging that better utilizes a device’s battery by integrating knowledge of the specific information that a user wants captured. We have developed the first instance of such a focused-task lifelogging system, called Ayumu, which aims to capture the reading material that a user interacts with over the course of a day. Instead of capturing images periodically, Ayumu uses a suite of inexpensive sensors to record only when reading material is present. By recognizing when it would be most beneficial to capture images, Ayumu achieves precision superior to, and recall comparable to, a conventional periodic lifelogger while using less energy.
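The core idea of sensor-gated capture can be sketched as follows. This is a minimal illustration, not Ayumu's actual implementation: the sensor heuristic, its thresholds, and all function names here are assumptions chosen for clarity, standing in for whatever low-power cues the real system uses to decide that reading material is likely present.

```python
def reading_material_likely(light_level, tilt_deg):
    """Cheap stand-in for a low-power sensor check.

    Both thresholds are illustrative assumptions, not values from
    the paper: reading tends to happen in adequate light with the
    device tilted toward a page.
    """
    return light_level > 200 and 10 <= tilt_deg <= 60


def focused_capture(samples, capture_fn):
    """Fire the expensive camera only when cheap sensors say
    reading material is probably present; otherwise stay in a
    low-power state and take no image.

    `samples` is a stream of (light_level, tilt_deg) sensor
    readings; `capture_fn` is the costly camera wake-and-shoot
    operation. Returns the number of images captured.
    """
    captures = 0
    for light, tilt in samples:
        if reading_material_likely(light, tilt):
            capture_fn()   # costly: wake the camera, take an image
            captures += 1
    return captures
```

The contrast with a periodic lifelogger is that `capture_fn` runs only for sensor readings consistent with reading, rather than on a fixed timer, which is where the energy savings come from.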
