Face vector: a point instruction-type interface for manipulation of an extended body in dual-task situations

Human work efficiency can be improved in many situations if two different tasks can be performed simultaneously. In recent years, various robotic extended bodies and their control interfaces have been proposed for assisting manual work. Most of these interfaces use path control-type operation and have been verified only in single-task situations; their usability with respect to voluntariness and intuitiveness in dual-task situations has not been addressed. In this paper, we focus on point instruction-type operation, which can be used together with conventional path control-type operation to improve control intuitiveness, and propose a new interface, the "face vector", which can point at a three-dimensional target position in a non-task-oriented environment. The voluntariness and intuitiveness of the proposed modality are discussed by measuring its pointing accuracy and its effect on work efficiency under dual-task conditions. The results provide quantitative pointing-accuracy data for the face vector in single- and dual-task conditions, and indicate that the proposed interface is intuitive enough not to reduce work efficiency.
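The abstract does not specify how the face-vector pointing direction is converted into a 3D target position. As a rough illustration only, the sketch below assumes the interface is modeled as a ray cast from an estimated head position along the face's forward direction and intersected with a known work plane; the function name and all parameters (head_position, face_direction, plane_point, plane_normal) are hypothetical and are not taken from the paper.

```python
import numpy as np

def face_vector_target(head_position, face_direction, plane_point, plane_normal, eps=1e-9):
    """Estimate a 3D pointing target as the intersection of the face-vector ray
    with a known work plane (hypothetical model, not the paper's algorithm).

    head_position : (3,) ray origin (head position) in world coordinates
    face_direction: (3,) vector along the face's forward axis
    plane_point   : (3,) any point on the work plane
    plane_normal  : (3,) normal of the work plane
    """
    p0 = np.asarray(head_position, dtype=float)
    d = np.asarray(face_direction, dtype=float)
    d = d / np.linalg.norm(d)                      # normalize the pointing ray
    n = np.asarray(plane_normal, dtype=float)

    denom = np.dot(n, d)
    if abs(denom) < eps:                           # ray parallel to plane: no target
        return None
    t = np.dot(n, np.asarray(plane_point, dtype=float) - p0) / denom
    if t < 0:                                      # plane lies behind the user
        return None
    return p0 + t * d                              # 3D target point on the plane


# Example: user at the origin, looking slightly downward at a work surface ahead.
target = face_vector_target(
    head_position=[0.0, 0.0, 1.5],
    face_direction=[0.0, 1.0, -0.5],
    plane_point=[0.0, 1.0, 0.8],
    plane_normal=[0.0, 0.0, 1.0],
)
print(target)  # -> [0.  1.4 0.8]
```

A ray-plane intersection is only one plausible way to realize point instruction; the actual system could instead snap the face-vector ray to the nearest tracked object, as studied in the authors' related VR experiment.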
