Visual and Inertial Data-Based Virtual Localization for Urban Combat

This work presents a position and orientation estimation system based on computer vision algorithms and inertial data from the inertial measurement unit (IMU) built into a smartphone. The system estimates position and orientation in real time. An Android application was developed that captures images of the environment and runs the computer vision algorithms. During implementation, the Harris, Shi-Tomasi, FAST, and SIFT feature-point detectors were tested with the goal of finding the detector that makes the system light enough to run on the processor of an embedded device such as a smartphone. The displacement of the camera attached to a mobile agent is computed with the optical flow method, and data from the smartphone's gyroscope are used to estimate the agent's orientation. The system also includes a simulation of the estimated motion within a three-dimensional environment running on a computer; the position and orientation data are sent from the smartphone to the computer wirelessly over a Wi-Fi connection. The three-dimensional environment is a digital model of the central block of the Universidad de las Fuerzas Armadas ESPE, where the tests of the implemented system were carried out.
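The pipeline described above has two estimation steps: camera displacement from consecutive image frames, and heading from integrated gyroscope rates. The sketch below illustrates both under stated assumptions. It is not the paper's implementation: frame-to-frame translation is estimated here with whole-frame phase correlation as a simplified stand-in for the paper's sparse feature detection plus optical flow, and orientation is recovered by plain Euler integration of the gyroscope's z-axis rate. All function names and parameter values are illustrative.

```python
import numpy as np


def estimate_translation(prev_frame, curr_frame):
    """Estimate the (dy, dx) pixel shift between two grayscale frames.

    Uses phase correlation: the normalized cross-power spectrum of the
    two frames has an inverse FFT that peaks at the translation offset.
    This is a dense, global stand-in for the sparse optical flow used
    in the paper, valid when the dominant motion is a pure translation.
    """
    F1 = np.fft.fft2(prev_frame)
    F2 = np.fft.fft2(curr_frame)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12  # normalize, avoid /0
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Indices past the midpoint correspond to negative (wrapped) shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)


def integrate_yaw(gyro_z_rates, dt, yaw0=0.0):
    """Dead-reckon heading (rad) by Euler-integrating gyroscope z-rates.

    gyro_z_rates: angular rates in rad/s sampled every dt seconds,
    as would be read from a smartphone gyroscope.
    """
    yaw = yaw0
    for omega in gyro_z_rates:
        yaw += omega * dt
    return yaw


if __name__ == "__main__":
    # Synthetic frame pair: the second frame is the first shifted 2 px
    # down and 3 px right (circular shift keeps the demo exact).
    rng = np.random.default_rng(42)
    frame = rng.random((64, 64))
    moved = np.roll(frame, (2, 3), axis=(0, 1))
    print(estimate_translation(frame, moved))  # → (2, 3)

    # 2 s of a constant 0.5 rad/s turn sampled at 100 Hz → 1.0 rad.
    print(integrate_yaw([0.5] * 200, dt=0.01))
```

In a real system the translation estimate would be scaled by camera height and focal length to obtain metric displacement, and the yaw integration would drift over time, which is why the paper fuses visual and inertial data rather than relying on either source alone.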
