Model-based vision for car following
This paper describes a vision processing algorithm that supports autonomous car following. The algorithm visually tracks the position of a "lead vehicle" from the vantage of a pursuing "chase vehicle." The algorithm requires a 2-D model of the back of the lead vehicle, composed of line segments corresponding to features that give rise to strong edges. There are seven sequential stages of computation: (1) extracting edge points; (2) associating extracted edge points with the model features; (3) determining the position of each model feature; (4) determining the model position; (5) updating the motion model of the object; (6) predicting the position of the object in the next image; (7) predicting the location of all object features from the predicted object position. All processing is confined to the 2-D image plane. The 2-D model location computed in this processing is used to determine the position of the lead vehicle with respect to a 3-D coordinate frame affixed to the chase vehicle. This algorithm has been used as part of a complete system to drive an autonomous vehicle, a High Mobility Multipurpose Wheeled Vehicle (HMMWV), so that it follows a lead vehicle at speeds up to 35 km/h. The algorithm runs at an update rate of 15 Hz and has a worst-case computational delay of 128 ms. The algorithm is implemented under the NASA/NBS Standard Reference Model for Telerobotic Control System Architecture (NASREM) and runs on a dedicated vision processing engine and a VME-based multiprocessor system.
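The following is a minimal sketch of the seven-stage tracking loop described above, not the paper's actual implementation. The helper details (gradient-threshold edge extraction, nearest-point association within a fixed gate, and an alpha-beta filter standing in for the unspecified motion model) are assumptions for illustration only; the threshold and gate constants are hypothetical.

```python
# Sketch of one frame of the seven-stage 2-D model-based tracking loop.
# Edge extraction, association, and motion-model details are assumed,
# not taken from the paper.
import numpy as np

GRAD_THRESH = 40.0   # assumed edge-strength threshold (hypothetical)
GATE = 5.0           # assumed association gate in pixels (hypothetical)

def extract_edge_points(image):
    """Stage 1: return (row, col) points with strong intensity gradients."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return np.argwhere(mag > GRAD_THRESH).astype(float)

def track_step(image, pred_features, offsets, pos, vel):
    """Stages 1-7 for one frame.
    pred_features : predicted (row, col) of each model feature this frame
    offsets       : fixed (row, col) offset of each feature from the model origin
    pos, vel      : current model-origin position and image-plane velocity
    """
    edges = extract_edge_points(image)                      # stage 1
    origin_votes = []
    for f, o in zip(pred_features, offsets):                # stage 2: associate
        d = np.linalg.norm(edges - f, axis=1)
        near = edges[d < GATE]
        if len(near):                                       # stage 3: feature position
            origin_votes.append(near.mean(axis=0) - o)
    if origin_votes:                                        # stage 4: model position
        meas_pos = np.mean(origin_votes, axis=0)
    else:
        meas_pos = pos + vel                                # coast on prediction
    # stage 5: update motion model (alpha-beta filter, an assumed stand-in)
    alpha, beta = 0.7, 0.3
    resid = meas_pos - (pos + vel)
    new_pos = pos + vel + alpha * resid
    new_vel = vel + beta * resid
    next_pos = new_pos + new_vel                            # stage 6: predict object
    next_features = [next_pos + o for o in offsets]         # stage 7: predict features
    return new_pos, new_vel, next_features
```

The alpha-beta filter here is merely one plausible realization of the "motion model" stage; the key structural point it illustrates is that stages 6 and 7 feed the predicted feature locations back into stage 2 of the next frame, which is what keeps the per-frame edge search confined to small image regions and the loop fast enough for a 15 Hz update rate.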