Visual control of mobile robots
The autonomous navigation of mobile robots is a key problem in robotics that has attracted enormous research effort for many years. Steady progress has been made in this area and, in recent years, we have witnessed the impressive development of fully autonomous vehicles able to drive in real environments. This is a qualitative advance that will undoubtedly have a tremendous impact on our daily lives. The interest in systems capable of efficient and robust autonomous navigation lies in their many potential applications, in industrial as well as domestic settings.

Among the variety of sensors available today, vision systems stand out because they provide very rich information at low cost. The common denominator of any flexible and versatile autonomous navigation system is therefore the integration of vision into the control loop. However, the versatility of vision systems comes at the cost of higher data-processing complexity.

Visual control, or visual servoing, has been a major research topic in robotics for more than four decades. In general terms, the basic idea of visual servoing is to stabilize a robot at a desired location by regulating to zero an error term estimated from image information (current and target). This idea, also called homing, is similar to the ability of insects such as bees, ants, and wasps to return to specific places: they store a snapshot at the target location and later estimate the direction to it from their current position. Within the framework of visual control, methods are generally classified as image-based, if the image data is used directly in the control loop; position-based, if the image data is used to compute pose parameters; or hybrid (partitioned), if a combination of the two is used.
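The image-based scheme mentioned above can be illustrated with the classical control law v = -λ L⁺ (s - s*), which drives the image-feature error to zero through the pseudo-inverse of the interaction matrix. The following is a minimal sketch, not taken from any specific paper in this issue: it assumes normalized image-point features with known depths, and the gain value and function names are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image
    point (x, y) at depth Z, relating feature motion to the 6-DOF
    camera velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """One image-based visual servoing step: v = -lam * L^+ (s - s*).

    features, desired: (N, 2) arrays of current/target point features.
    depths: per-feature depth estimates (assumed known here).
    Returns the commanded 6-DOF camera velocity."""
    e = (features - desired).ravel()          # image-space error, regulated to zero
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e
```

When the current features coincide with the target snapshot, the error and hence the commanded velocity vanish, which is exactly the homing behaviour described above; with at least three non-collinear points the stacked interaction matrix constrains all six velocity components.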
However, this classification no longer captures the diversity and particularities of the strategies that have been investigated in the visual control literature. In this context, venues such as workshops and journal special issues are necessary to extend the state of the art on this topic. Within the framework of autonomous navigation, the integration of vision into the control loop remains an open and ambitious research area. Moreover, visual control is a multidisciplinary field that requires collaboration between the computer vision and robot control communities. There is still a gap between these communities that hinders more fruitful joint research. Thus, one of the goals of this Special Issue is to bring together works that fill this gap, presenting the latest advances in the field and disseminating their results to the scientific community.

2. The ViCoMoR workshop