Editorial for Journal of Field Robotics - Special Issue on Machine Learning Based Robotics in Unstructured Environments
Building autonomous robots that operate safely and effectively in dynamic and unstructured environments remains an important open research area. Changing, natural environments defy the creation of accurate models and enforce a strong dependence on immediate sensor data. However, the complexity of these environments, uncertainty in the interpretation and integration of sensor readings, and uncertainty in how actions will affect the environment and the robot's state make programming robust behavior for these robots a continuing challenge. Indeed, the goal of autonomous robots functioning robustly in outdoor environments they have not previously encountered remains elusive. Completion of the Defense Advanced Research Projects Agency (DARPA) Grand Challenge was an exciting step toward this goal, but competitors still required extensive use of well-chosen global positioning system waypoints, sometimes only a few meters apart. Successful navigation between waypoints a few hundred meters apart in unfamiliar, dynamic outdoor environments is still a key research goal in robotics. This special issue is motivated by a workshop on "Machine Learning Based Robotics in Unstructured Environments," which took place at the 2005 Neural Information Processing Systems Conference. The goal of this workshop was to reformulate robotics in unstructured environments within the theoretical framework of machine learning. Although this goal may seem lofty, it has a sound foundation and promises substantial benefits to robotics. Machine learning algorithms offer a principled approach to dealing with uncertainty in sensing, computation, and action. The foundations of machine learning hold the potential for a new theoretical framework for addressing the pervasive uncertainty that plagues field robotics.
The eight papers in this special issue represent a growing movement in robotics to apply machine learning to subproblems that cannot be effectively modeled by hand, or, more specifically, where current theoretical models are inadequate for the task. These papers represent a wide range of approaches, from simple machine learning algorithms to Bayesian approaches and sophisticated manifold-based learning techniques. Two basic strategies for obtaining learning data for robot tasks are demonstrated in this collection: learning from example, in which a human operator demonstrates the task and the robot uses the resulting data to learn the sensor-action mapping; and learning from experience, or online learning, in which the robot uses its own local experience to improve its performance. Unlike most standard learning data, these sensor-action traces are typically spatially and temporally extended, posing unique challenges for the techniques applied. The specific subtasks addressed include learning the sensor appearance of locally traversable terrain in order to extrapolate it to long-range sensor data for long-range planning, using examples to model operator performance and optimize model parameters, and using examples to identify distinguished places in a topological map.

The first four papers in the special issue are motivated by the DARPA Learning Applied to Ground Robots (LAGR) program. The objective of this program is to develop a machine learning approach to vision-based navigation in off-road outdoor environments. The motivation is based on the observation that humans can easily identify long-range traversable terrain in an image, whereas, to date, we have not been able to program robots to robustly accomplish this. The first paper, titled "The DARPA LAGR Pro…
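The near-to-far terrain-learning subtask described above can be illustrated with a minimal sketch: short-range patches are labeled traversable or not (e.g., from stereo geometry), a simple appearance model is fit to them, and that model is then applied to long-range image patches beyond the reliable sensing horizon. The two-dimensional color features and the nearest-mean classifier below are illustrative assumptions, not any particular LAGR team's method.

```python
import math

def train_nearest_mean(samples):
    """samples: list of (feature_vector, label) with label in {0, 1}.
    Returns the per-class mean feature vector."""
    sums = {0: None, 1: None}
    counts = {0: 0, 1: 0}
    for x, y in samples:
        sums[y] = list(x) if sums[y] is None else [a + b for a, b in zip(sums[y], x)]
        counts[y] += 1
    return {y: [v / counts[y] for v in sums[y]] for y in (0, 1)}

def classify(means, x):
    """Assign x to the class whose mean is nearer (Euclidean distance)."""
    def dist(m):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, m)))
    return 0 if dist(means[0]) <= dist(means[1]) else 1

# Near-field patches labeled by geometry: 1 = traversable, 0 = obstacle.
# Feature values are hypothetical appearance descriptors (e.g., color stats).
near_field = [
    ([0.8, 0.6], 1), ([0.7, 0.5], 1),
    ([0.2, 0.1], 0), ([0.3, 0.2], 0),
]
means = train_nearest_mean(near_field)

# Extrapolate the learned appearance model to a far-field patch
# that lies beyond the range of reliable geometric sensing.
far_patch = [0.75, 0.55]
label = classify(means, far_patch)  # 1 → predicted traversable
```

Real systems replace the toy features with texture and color statistics and the nearest-mean rule with stronger classifiers, but the structure, self-supervised labels near the robot driving predictions far from it, is the same.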