Accuracy in Fixing Ship's Positions by CCD Camera Survey of Horizontal Angles

When conducting coastal navigation, the navigator visually identifies the beacons within his sight, measures angles and distances to them, and then determines the coordinates of the ship's position from those measurements by analytical or graphical methods. In effect, he acts as a measuring device that converts visual signals, together with beacon information taken from nautical publications, into position coordinates. So far, no attempts have been made to automate these activities, although the necessary equipment is available: high-resolution video cameras capable of performing identification and measurement, electronic navigational charts containing beacon information in digital form, and computers powerful enough to process visual images in real time. From a scientific point of view, this new situation raises interesting questions. Is it possible to develop digital methods for automatic identification of beacons based on a sequence of coastline images and the electronic navigational chart? What accuracy of the ship's position coordinates can be obtained from horizontal angles measured with high-resolution CCD cameras? The answers to these questions may justify further research into the use of optical systems for automating maritime coastal navigation.
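The analytical fix from two horizontal angles mentioned above is the classical three-point (resection) problem: each measured angle places the ship on a position circle through a pair of beacons, and the two circles share the middle beacon, so the ship is their other intersection. The sketch below is a minimal illustration of that geometry, not the paper's method; the beacon coordinates and the `side` flag (encoding the navigator's a priori knowledge of which half-plane, e.g. seaward of the beacon line, the ship occupies) are assumptions introduced for the example.

```python
import math

def circle_through_chord(p1, p2, angle, side):
    # Position circle on which an observer sees chord p1-p2 under
    # `angle` (radians).  `side` = +1 or -1 selects the half-plane
    # of the chord (sign w.r.t. the chord's left-hand normal) in
    # which the observer is known to lie.
    (x1, y1), (x2, y2) = p1, p2
    chord = math.hypot(x2 - x1, y2 - y1)
    r = chord / (2.0 * math.sin(angle))          # inscribed-angle theorem
    # Perpendicular offset of the centre from the chord midpoint.
    h = math.sqrt(max(r * r - (chord / 2.0) ** 2, 0.0))
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    nx, ny = -(y2 - y1) / chord, (x2 - x1) / chord   # left-hand unit normal
    # For angle < 90 deg the observer is on the major arc, i.e. on the
    # same side of the chord as the centre; for angle > 90 deg opposite.
    s = side if angle < math.pi / 2 else -side
    return (mx + s * h * nx, my + s * h * ny), r

def fix_from_angles(a, b, c, alpha, beta, side):
    # Both position circles pass through the middle beacon b; the ship
    # is their *other* intersection point, obtained by reflecting b
    # across the line joining the two circle centres.
    (cx1, cy1), _ = circle_through_chord(a, b, alpha, side)
    (cx2, cy2), _ = circle_through_chord(b, c, beta, side)
    dx, dy = cx2 - cx1, cy2 - cy1
    d2 = dx * dx + dy * dy                        # degenerate if centres coincide
    t = ((b[0] - cx1) * dx + (b[1] - cy1) * dy) / d2
    fx, fy = cx1 + t * dx, cy1 + t * dy           # foot of perpendicular from b
    return (2.0 * fx - b[0], 2.0 * fy - b[1])
```

For instance, with hypothetical beacons at (0, 0), (4, 0), (8, 0) and both measured angles equal to arccos 0.6 (about 53.1 deg), the fix returned for `side=-1` is approximately (4, -3), which is where those angles are indeed observed.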
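The accuracy question can be given a rough order-of-magnitude feel before any rigorous analysis: one pixel of a CCD camera subtends approximately (field of view) / (horizontal resolution), and the resulting cross-range position error grows linearly with the distance to the beacon. The camera parameters and range below are illustrative assumptions, not values from this work.

```python
import math

# Illustrative (assumed) parameters -- not taken from the paper.
fov_deg = 40.0      # horizontal field of view of the camera, degrees
px = 4096           # horizontal resolution, pixels
range_nm = 5.0      # distance to the beacon, nautical miles

# One-pixel angular resolution and the cross-range error it induces.
res_deg = fov_deg / px                    # ~0.01 deg per pixel
res_rad = math.radians(res_deg)
err_m = range_nm * 1852.0 * res_rad       # metres of cross-range error
```

Under these assumptions a one-pixel angle error at five nautical miles corresponds to a cross-range error on the order of 1.5 m, which suggests why CCD-based horizontal-angle fixes are worth investigating.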
