Feasibility of identifying eating moments from first-person images leveraging human computation

There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to combat obesity and other nutrition-related diseases. However, it is not yet possible to automatically capture and objectively assess an individual's eating behavior. Current dietary assessment and journaling approaches have several limitations: they place a significant burden on individuals and are often not sufficiently detailed or accurate. In this paper, we describe an approach that leverages human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step toward both automating dietary assessment and building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, in which 17,575 images were collected, our method recognized eating moments with 89.68% accuracy.
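To make the human-computation step concrete, the sketch below shows one plausible way to turn redundant crowd judgments on first-person images into per-image eating/non-eating decisions and to score them against ground truth. The majority-vote rule, the three-worker redundancy, and the image identifiers are illustrative assumptions for this sketch, not the paper's actual pipeline.

```python
# Minimal sketch (assumed, not the authors' implementation): aggregate
# crowd-sourced binary labels per image by majority vote, then compute
# accuracy against ground-truth annotations.
from collections import defaultdict

def majority_vote(crowd_labels):
    """crowd_labels: list of (image_id, is_eating) pairs from multiple workers.
    Returns {image_id: True/False} decided by simple majority."""
    votes = defaultdict(list)
    for image_id, is_eating in crowd_labels:
        votes[image_id].append(is_eating)
    return {img: sum(v) > len(v) / 2 for img, v in votes.items()}

def accuracy(predicted, ground_truth):
    """Fraction of images whose majority-vote label matches the ground truth."""
    correct = sum(predicted[img] == truth for img, truth in ground_truth.items())
    return correct / len(ground_truth)

# Example: three (hypothetical) workers label two images; "img_001" is an eating moment.
labels = [("img_001", True), ("img_001", True), ("img_001", False),
          ("img_002", False), ("img_002", False), ("img_002", False)]
print(accuracy(majority_vote(labels), {"img_001": True, "img_002": False}))  # 1.0
```

In practice, such votes could come from a crowdsourcing platform; accuracy here simply denotes the fraction of images whose aggregated label matches the ground-truth annotation.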
