Selecting and Commanding Individual Robots in a Multi-Robot System

We present a novel real-time, computer-vision-based system for interaction between a single human and a multi-robot system: the user first selects an individual robot from the group simply by looking at it, and then commands the selected robot with a motion-based gesture. The robots estimate which of them the user is looking at by running a distributed leader election based on the "score" of each robot's frontal face detection.
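The score-based election described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `ScoreMessage` type, the broadcast model, and the tie-breaking rule (lower robot ID wins) are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass(frozen=True)
class ScoreMessage:
    """One robot's broadcast: its ID and its frontal-face detection score.

    A score of 0.0 means the robot did not detect a frontal face
    (i.e., the user is not looking at it).
    """
    robot_id: int
    face_score: float


def elect_selected_robot(messages: Iterable[ScoreMessage]) -> Optional[int]:
    """Return the ID of the robot the user is most likely looking at.

    The winner is the robot reporting the highest face score; ties break
    toward the lower robot ID so every robot reaches the same decision.
    Returns None when no robot detected a frontal face.
    """
    candidates = [m for m in messages if m.face_score > 0.0]
    if not candidates:
        return None
    # Sort key: highest score first, then lowest ID.
    winner = min(candidates, key=lambda m: (-m.face_score, m.robot_id))
    return winner.robot_id
```

Because every robot receives the same set of broadcast messages and applies the same deterministic rule, all robots agree on the winner without any central coordinator; only the elected robot then listens for the user's motion-based gesture.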
