Motion strategies for autonomous observers

This dissertation introduces algorithms that generate motion strategies for a new class of autonomous agents called Autonomous Observers, and describes integrated systems in which these algorithms are embedded. An autonomous observer (AO) is a physical agent that performs high-level, vision-oriented operations, such as tracking moving targets or building maps of environments. What distinguishes an AO from other autonomous agents is that it uses its sensors as end-effectors. In traditional robotics, sensing is a means to an end: sonars are used for collision avoidance in robot navigation, cameras are used to recognize individual parts in assembly tasks, and proximity sensors enhance grasping operations. For an AO, in contrast, information gathering is itself the goal. Building models of objects or environments, detecting faults in large structures, tracking moving targets, and performing surveillance operations are all examples of tasks for AO systems.

One requirement characteristic of all AO systems is the need to satisfy geometric visibility constraints while planning and executing motions. Although similar or related problems have been studied in other contexts, the material presented in this thesis focuses on the fundamental motion planning problems rather than on pure imaging or sensing issues: Which locations must a robot visit to map a building efficiently? How should a robot proceed in order to explore an unknown environment? What motions will keep a target in view despite the presence of occluding obstacles? How can the number of sensing operations be reduced? To answer these questions, this thesis proposes a collection of methods, including randomized art-gallery and next-best-view algorithms, both sketched informally below. These algorithms have been integrated into two working robot prototypes, which are also described in this dissertation.
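
To make the art-gallery idea concrete, here is a minimal sketch of a randomized art-gallery heuristic: sample candidate sensor locations in the free space, compute which portion of the environment boundary each candidate sees, and greedily keep candidates until the boundary is covered. The toy environment (a square room with one obstacle), the sampling densities, and all function names are illustrative assumptions, not the implementation developed later in the thesis.

```python
import random

# Hypothetical environment: a 10x10 room with a square obstacle in the middle.
ROOM = [((0, 0), (10, 0)), ((10, 0), (10, 10)), ((10, 10), (0, 10)), ((0, 10), (0, 0))]
OBSTACLE = [((4, 4), (6, 4)), ((6, 4), (6, 6)), ((6, 6), (4, 6)), ((4, 6), (4, 4))]
WALLS = ROOM + OBSTACLE

def _orient(a, b, c):
    """Signed area of triangle abc; its sign gives the turn direction."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p, q, a, b):
    """True if segment pq properly crosses segment ab."""
    d1, d2 = _orient(a, b, p), _orient(a, b, q)
    d3, d4 = _orient(p, q, a), _orient(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def sees(guard, point):
    """Line-of-sight test: pull the target slightly off its own wall,
    then check that the sight line crosses no wall segment."""
    eps = 1e-6
    target = (point[0] + (guard[0] - point[0]) * eps,
              point[1] + (guard[1] - point[1]) * eps)
    return not any(_segments_cross(guard, target, a, b) for a, b in WALLS)

def boundary_samples(n_per_edge=8):
    """Evenly spaced points along every wall; these stand in for the
    surface the sensor must eventually cover."""
    pts = []
    for (x1, y1), (x2, y2) in WALLS:
        for i in range(n_per_edge):
            t = (i + 0.5) / n_per_edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts

def random_free_point():
    """Rejection-sample a candidate guard position in the free space."""
    while True:
        x, y = random.uniform(0, 10), random.uniform(0, 10)
        if not (4 < x < 6 and 4 < y < 6):   # outside the obstacle
            return (x, y)

def randomized_art_gallery(n_candidates=200, seed=0):
    random.seed(seed)
    targets = boundary_samples()
    candidates = [random_free_point() for _ in range(n_candidates)]
    coverage = [{i for i, t in enumerate(targets) if sees(c, t)} for c in candidates]
    uncovered, guards = set(range(len(targets))), []
    while uncovered:
        best = max(range(len(candidates)), key=lambda j: len(coverage[j] & uncovered))
        gain = coverage[best] & uncovered
        if not gain:          # remaining samples are unseen by every candidate
            break
        guards.append(candidates[best])
        uncovered -= gain
    return guards

print(randomized_art_gallery())
```

Because candidates are drawn at random and then selected by a greedy set-cover rule, the result is a small, though not necessarily minimal, set of observation points.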
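
The next-best-view question (where should the robot sense next while exploring?) can similarly be illustrated with a grid-based sketch: score each reachable cell by how much unexplored area it would reveal, penalized by travel distance, and move to the best-scoring cell. Real next-best-view planners must also reason about sensor models, image registration, and safe navigation; the grid layout, scoring weights, and names below are assumptions made purely for illustration.

```python
import math

# Hypothetical 12x12 occupancy grid: 0 = known free, 1 = wall, -1 = unexplored.
UNKNOWN, FREE, WALL = -1, 0, 1

def make_demo_grid():
    g = [[UNKNOWN] * 12 for _ in range(12)]
    for y in range(1, 11):
        for x in range(1, 6):
            g[y][x] = FREE          # the left half has been mapped already
    for x in range(12):
        g[0][x] = g[11][x] = WALL   # outer walls
    for y in range(12):
        g[y][0] = g[y][11] = WALL
    return g

def visible(grid, src, dst):
    """Walk the straight line from src to dst in small steps; the ray is
    blocked by the first known wall cell it enters."""
    (x0, y0), (x1, y1) = src, dst
    steps = int(2 * math.hypot(x1 - x0, y1 - y0)) + 1
    for i in range(1, steps):
        t = i / steps
        cx, cy = round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0))
        if grid[cy][cx] == WALL:
            return False
    return True

def next_best_view(grid, robot, sensor_range=4.0, travel_weight=0.5):
    """Score every known-free cell by how many unexplored cells it would
    reveal, minus a travel penalty, and return the best one."""
    cells = [(x, y) for y in range(12) for x in range(12)]
    best, best_score = None, -math.inf
    for cand in cells:
        if grid[cand[1]][cand[0]] != FREE:
            continue
        gain = sum(
            1 for (x, y) in cells
            if grid[y][x] == UNKNOWN
            and math.dist(cand, (x, y)) <= sensor_range
            and visible(grid, cand, (x, y))
        )
        score = gain - travel_weight * math.dist(robot, cand)
        if score > best_score:
            best, best_score = cand, score
    return best

grid = make_demo_grid()
print(next_best_view(grid, robot=(2, 2)))
```

Repeating this selection after every sensing operation yields an incremental exploration loop that terminates once no candidate reveals any new cells.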