Experience Based Task and Environment Specification for Plan-based Robotic Agents

Planning for autonomous robots differs from classical planning because actions do not behave deterministically. In a real-world environment there are too many parameters and too many cases to handle before plan execution. Nevertheless, planning is needed to give robots the flexibility to work in unknown or only partially known environments. Plan execution can be seen as a flexible way of handling unknown and/or changing environments: at runtime, sensors and reasoners can be queried for the necessary information. To cover the uncertainty in the real world, more than pure logical reasoning is needed. In order to close this gap, we want to extend the CRAM Plan Language (CPL) so that it can query probabilistic world models for the best way to continue under the current conditions and beliefs. These world models should ultimately be learned and trained autonomously by the robot. For this, the robot needs to log data in an appropriate way and use these data to infer and train a world model. However, it is not feasible to learn one model of the whole world. It looks much more promising to use a combined approach of defining and learning a hierarchical model of the world. Starting with small parts of a plan execution and learning models that increase the success rate of these parts can be a first step. In this paper, a first proof of concept of how such a model can be found and learned automatically is presented.
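To make the intended interaction concrete, the following is a minimal sketch (in Python rather than CPL's Common Lisp) of a plan step querying a learned probabilistic model for the most promising way to continue. All names (`SuccessModel`, `Candidate`, the features and weights) are hypothetical illustrations, not the paper's implementation; a real model would be trained on the robot's logged execution data.

```python
# Hypothetical sketch: a plan step asks a learned success model which
# candidate parameterization (e.g. grasp pose) is most likely to succeed
# under the current belief state. Names and features are illustrative only.

import math
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Candidate:
    """One possible way to continue the plan, e.g. a grasp pose."""
    name: str
    features: Dict[str, float]  # features of belief state + parameterization


class SuccessModel:
    """Stand-in for a probabilistic world model learned from logged executions."""

    def __init__(self, weights: Dict[str, float]):
        self.weights = weights

    def success_probability(self, candidate: Candidate) -> float:
        # Toy logistic model over logged features; in the envisioned system
        # this would be a model trained autonomously from execution logs.
        score = sum(self.weights.get(k, 0.0) * v
                    for k, v in candidate.features.items())
        return 1.0 / (1.0 + math.exp(-score))


def choose_continuation(model: SuccessModel,
                        candidates: List[Candidate]) -> Candidate:
    """Plan step: query the model for the most promising candidate."""
    return max(candidates, key=model.success_probability)


if __name__ == "__main__":
    model = SuccessModel(weights={"distance_to_object": -2.0,
                                  "object_visible": 1.5})
    candidates = [
        Candidate("grasp-from-left",
                  {"distance_to_object": 0.4, "object_visible": 1.0}),
        Candidate("grasp-from-right",
                  {"distance_to_object": 0.9, "object_visible": 1.0}),
    ]
    best = choose_continuation(model, candidates)
    print(f"Chosen continuation: {best.name}")
```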