A prototyping environment for interaction between a human and a robotic multi-agent system

In this paper we describe our prototyping environment for studying concepts that empower a single user to control a robotic multi-agent system. We investigate and validate these concepts through experiments with a fleet of hovering robots. Specifically, we report on a first experiment in which one robot is equipped with an RGB-D sensor, enabling the user to interact directly with the multi-agent system without carrying any device.
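To make the device-free interaction idea more concrete, the following is a minimal sketch of how gestures recognized from the RGB-D-equipped robot's skeleton stream could be dispatched as commands to the fleet. The gesture set, the GestureEvent/SwarmInterface types, and the command names are assumptions for illustration only; the paper does not specify this API, and a real system would receive events from an actual skeleton-tracking pipeline rather than the simulated stream used here.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, Dict, List, Optional


class Gesture(Enum):
    """Hypothetical gestures recognizable from an RGB-D skeleton stream."""
    POINT_AT_ROBOT = auto()    # user points toward one agent -> select it
    RAISE_BOTH_HANDS = auto()  # command the whole fleet to take off
    LOWER_BOTH_HANDS = auto()  # command the whole fleet to land


@dataclass
class GestureEvent:
    """A recognized gesture plus the robot it refers to, if any."""
    gesture: Gesture
    target_robot: Optional[int] = None


class SwarmInterface:
    """Stand-in for the fleet's command channel (assumed, not the paper's API)."""
    def __init__(self, num_robots: int) -> None:
        self.num_robots = num_robots
        self.selected: List[int] = []

    def select(self, robot_id: int) -> None:
        print(f"selecting robot {robot_id}")
        self.selected = [robot_id]

    def takeoff_all(self) -> None:
        print("takeoff: all robots")

    def land_all(self) -> None:
        print("land: all robots")


def make_dispatcher(swarm: SwarmInterface) -> Callable[[GestureEvent], None]:
    """Map each recognized gesture to a swarm command; unknown gestures are ignored."""
    handlers: Dict[Gesture, Callable[[GestureEvent], None]] = {
        Gesture.POINT_AT_ROBOT: lambda e: swarm.select(e.target_robot),
        Gesture.RAISE_BOTH_HANDS: lambda e: swarm.takeoff_all(),
        Gesture.LOWER_BOTH_HANDS: lambda e: swarm.land_all(),
    }

    def dispatch(event: GestureEvent) -> None:
        handler = handlers.get(event.gesture)
        if handler is not None:
            handler(event)

    return dispatch


if __name__ == "__main__":
    # Simulated gesture stream; in the real system these events would come
    # from skeleton tracking on the RGB-D-equipped robot.
    dispatch = make_dispatcher(SwarmInterface(num_robots=4))
    for event in [GestureEvent(Gesture.POINT_AT_ROBOT, target_robot=2),
                  GestureEvent(Gesture.RAISE_BOTH_HANDS),
                  GestureEvent(Gesture.LOWER_BOTH_HANDS)]:
        dispatch(event)
```

Decoupling gesture recognition from command dispatch in this way keeps the interaction layer independent of the particular fleet back end, which fits the prototyping character of the environment described above.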
