A Scalable Architecture to Design Multi-modal Interactions for Qualitative Robot Navigation

This paper presents an approach for teleoperating a mobile robot based on qualitative spatial relations, which are instructed through speech-based and deictic commands. Given a workspace containing a robot, a user, and some objects, we exploit fuzzy reasoning criteria to incrementally build a pertinence map relating locations in the workspace to the qualitative commands acquired so far. We discuss the modularity of the adopted reasoning technique through use cases involving a conjunction of spatial kernels. In particular, we address the problem of finding a suitable target location from a set of qualitative spatial relations using symbolic reasoning and Monte Carlo simulations. Our architecture is evaluated in a scenario with simple kernels and near-perfect perception of the environment. Nevertheless, the presented approach is modular and scalable, and it could also be exploited to design applications involving multi-modal qualitative interactions.
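The target-selection step described above can be illustrated with a minimal sketch. Assuming sigmoid- and Gaussian-shaped fuzzy kernels, a min t-norm for conjoining relations, and uniform Monte Carlo sampling over the workspace (the kernel shapes, the `left_of`/`near` helpers, and all parameters below are illustrative assumptions, not the paper's implementation), the target is the sampled location with the highest conjunctive pertinence:

```python
import numpy as np

# Hypothetical fuzzy spatial kernels: each maps a 2-D location to a
# pertinence value in [0, 1]. The shapes (sigmoid and Gaussian falloff)
# are assumptions for illustration only.

def left_of(obj, x, sharpness=2.0):
    """Pertinence of 'left of obj' along the x-axis (sigmoid falloff)."""
    return 1.0 / (1.0 + np.exp(sharpness * (x[0] - obj[0])))

def near(obj, x, scale=1.0):
    """Pertinence of 'near obj' (Gaussian falloff with distance)."""
    return np.exp(-np.linalg.norm(x - obj) ** 2 / (2 * scale ** 2))

def conjunction(pertinences):
    """Combine incrementally acquired relations; min is a common t-norm."""
    return min(pertinences)

def monte_carlo_target(relations, bounds, n_samples=10000, rng=None):
    """Sample candidate locations uniformly over the workspace and keep
    the one with the highest conjunctive pertinence over all relations."""
    rng = rng or np.random.default_rng()
    low, high = bounds
    samples = rng.uniform(low, high, size=(n_samples, 2))
    scores = [conjunction([rel(x) for rel in relations]) for x in samples]
    best = int(np.argmax(scores))
    return samples[best], scores[best]

# Example command: "go to the left of the box, near the chair"
box, chair = np.array([2.0, 0.0]), np.array([0.0, 1.5])
relations = [lambda x: left_of(box, x), lambda x: near(chair, x)]
target, pertinence = monte_carlo_target(relations, bounds=([-5, -5], [5, 5]))
print(f"target {target}, pertinence {pertinence:.2f}")
```

In a full pipeline, each relation would instead be grounded from the speech and gesture channels and anchored to perceived object poses; the sampling-and-scoring loop, however, is independent of how the kernels are obtained, which is what makes the conjunction of kernels modular.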
