Interactive Task Learning with Discrete and Continuous Features

Learning tasks from demonstration is key to making robots flexible and accessible to non-programmers. We present a task learning framework that combines the strengths of discrete and continuous representations. The robot learns a set of criteria and expectations that represent the goal of a demonstrated task: the task consists of performing actions that fulfill the expectations on objects that meet the criteria. We propose modeling continuous criteria and expectations as Gaussian distributions. To handle the simultaneous demonstration of multiple tasks, we allow expectations to be multi-modal and model them as mixtures of Gaussians. We present an implementation of this framework on the robot
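The two modeling choices above can be sketched concretely. The following is a minimal, hypothetical illustration (not the paper's implementation): a continuous criterion fit as a single Gaussian over demonstrated values, and a multi-modal expectation fit as a two-component mixture of Gaussians with plain EM. All names, thresholds, and sample data are assumptions for illustration.

```python
import numpy as np

def fit_gaussian(samples):
    """Fit the mean and standard deviation of a 1-D Gaussian criterion."""
    x = np.asarray(samples, dtype=float)
    return x.mean(), x.std(ddof=1)

def meets_criterion(value, mean, std, z_max=2.0):
    """Accept a value within z_max standard deviations of the mean."""
    return abs(value - mean) / std <= z_max

def fit_gmm_1d(samples, k=2, iters=100):
    """Fit a k-component 1-D Gaussian mixture with plain EM."""
    x = np.asarray(samples, dtype=float)
    # Deterministic init: spread component means across the data range.
    mus = np.linspace(x.min(), x.max(), k)
    variances = np.full(k, x.var())
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        dens = (weights
                * np.exp(-(x[:, None] - mus) ** 2 / (2 * variances))
                / np.sqrt(2 * np.pi * variances))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        mus = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - mus) ** 2).sum(axis=0) / nk
    return weights, mus, variances

# Single-mode criterion: demonstrated object widths (cm, hypothetical).
widths = [4.1, 3.9, 4.0, 4.2, 3.8]
mu, sigma = fit_gaussian(widths)
print(meets_criterion(4.05, mu, sigma))   # close to demonstrations -> True
print(meets_criterion(9.0, mu, sigma))    # far from demonstrations -> False

# Multi-modal expectation: placements drawn from two demonstrated tasks.
placements = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
weights, mus, variances = fit_gmm_1d(placements)
print(np.sort(mus))                       # component means near 1.0 and 5.0
```

The z-score gate is just one possible acceptance rule; a likelihood threshold would serve the same role. The EM fit recovers one mode per demonstrated task, which is what makes simultaneous demonstrations separable.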