Indoor Localisation Through Object Detection on Real-Time Video Implementing a Single Wearable Camera

This paper presents an accurate indoor localisation approach that provides context-aware support for Activities of Daily Living (ADL). It explores the use of contemporary wearable technology (Google Glass) to provide a unique first-person view of the occupant's environment, and employs machine vision techniques to determine the occupant's location through detection of environmental objects within their field of view. Specifically, the video footage is streamed to a server where object recognition is performed using the Oriented FAST and Rotated BRIEF (ORB) algorithm, with a K-Nearest Neighbour matcher used to match the saved keypoints of the objects to the scene. To validate the approach, an experimental set-up consisting of three ADL routines, each containing at least ten activities ranging from drinking water to making a meal, was considered. Ground truth was obtained from manually annotated video data, and the approach was subsequently benchmarked against a common method of indoor localisation that employs dense sensor placement. The paper presents the results of these experiments, which highlight the feasibility of using off-the-shelf machine vision algorithms to determine indoor location from wearable video-based sensor data. The results show a recall, precision, and F-measure of 0.82, 0.96, and 0.88, respectively. The method also provides secondary benefits, such as first-person tracking within the environment and the absence of any required sensor interaction to determine occupant location.
