Autonomous flying blimp interaction with human in an indoor space

We present the Georgia Tech Miniature Autonomous Blimp (GT-MAB), which is designed to support up to two hours of human-robot interaction experiments in an indoor space. GT-MAB is safe while flying in close proximity to humans. It can detect a human subject's face, follow the human, and recognize hand gestures. GT-MAB employs a deep neural network based on the single-shot multibox detector (SSD) to jointly detect a human user's face and hands in the real-time video stream collected by the onboard camera. A human-robot interaction procedure is designed and tested with various human users; the learning algorithms recognize two hand-waving gestures, and the user does not need to wear any additional tracking device when interacting with the flying blimp. Vision-based feedback controllers steer the blimp to follow the human and to fly in one of two distinguishable patterns in response to each of the two hand gestures. The blimp communicates its intentions to the human user by displaying visual symbols. The collected experimental data show that visual feedback from the blimp in reaction to the human user significantly improves the interactive experience between the blimp and the human. The demonstrated success of this procedure indicates that GT-MAB could serve as a flying robot capable of safely collecting human data in indoor environments.
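To make the pipeline concrete, the sketch below illustrates the kind of perception-to-control loop described above: an SSD-style detector locates the user's face in each onboard camera frame, and a simple proportional visual-servoing law maps the resulting bounding box to yaw and forward-speed commands so the blimp keeps the face centered at a fixed apparent size. This is a minimal illustration under assumptions, not the authors' implementation: the model files, controller gains, and send_command() interface are hypothetical, and the joint hand detection and gesture classification used on GT-MAB are omitted.

```python
# Minimal sketch (not the GT-MAB code) of an SSD-based face-following loop.
# Assumptions: a pretrained Caffe SSD face detector, a webcam standing in for
# the onboard camera, and a placeholder send_command() to the autopilot.

import cv2
import numpy as np

PROTO = "deploy.prototxt"          # hypothetical SSD model definition
WEIGHTS = "ssd_face.caffemodel"    # hypothetical SSD weights
K_YAW, K_FWD = 0.8, 0.5            # illustrative proportional gains
TARGET_WIDTH = 0.25                # desired face width as a fraction of frame

net = cv2.dnn.readNetFromCaffe(PROTO, WEIGHTS)
cap = cv2.VideoCapture(0)          # stand-in for the blimp's onboard camera

def detect_face(frame):
    """Run the SSD and return the best face box (x1, y1, x2, y2), or None."""
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                 (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    det = net.forward()            # detections: shape (1, 1, N, 7)
    if det.shape[2] == 0:
        return None
    best = int(np.argmax(det[0, 0, :, 2]))
    if det[0, 0, best, 2] < 0.5:   # confidence threshold
        return None
    return det[0, 0, best, 3:7]    # normalized box coordinates

def send_command(yaw_rate, forward_speed):
    """Placeholder: forward motor commands to the blimp's autopilot."""
    pass

while True:
    ok, frame = cap.read()
    if not ok:
        break
    box = detect_face(frame)
    if box is not None:
        x1, y1, x2, y2 = box
        cx = (x1 + x2) / 2.0                  # horizontal face center in [0, 1]
        width = x2 - x1                       # apparent size, proxy for range
        yaw = K_YAW * (0.5 - cx)              # turn to keep the face centered
        fwd = K_FWD * (TARGET_WIDTH - width)  # approach or retreat to hold distance
        send_command(yaw, fwd)
```

A design note on the proportional law: using the bounding-box width as a range proxy avoids any depth sensor or tracking device on the user, which matches the uninstrumented interaction the abstract emphasizes, at the cost of sensitivity to detection jitter that a real controller would filter.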
