Temporal-Difference Learning to Assist Human Decision Making during the Control of an Artificial Limb

In this work we explore the use of reinforcement learning (RL) to assist human decision making, combining state-of-the-art RL algorithms with an application to prosthetics. Managing human-machine interaction is a problem of considerable scope, and the simplification of human-robot interfaces is especially important in the domains of biomedical technology and rehabilitation medicine. For example, amputees who control artificial limbs are often required to quickly switch between a number of control actions or modes of operation to operate their devices. We suggest that by learning to anticipate (predict) a user’s behaviour, artificial limbs could take an active role in a human’s control decisions, reducing the burden on their users. Recently, we showed that RL in the form of general value functions (GVFs) could be used to accurately detect a user’s control intent prior to their explicit control choices. In the present work, we explore the use of temporal-difference learning and GVFs to predict when users will switch their control influence between the different motor functions of a robot arm. Experiments were performed using a multi-function robot arm that was controlled by muscle signals from a user’s body, similar to conventional artificial limb control. Our approach was able to acquire and maintain forecasts about a user’s switching decisions in real time. It also provides an intuitive and reward-free way for users to correct or reinforce the decisions made by the machine learning system. We expect that when a system is certain enough about its predictions, it can begin to take over switching decisions from the user to streamline control and potentially decrease the time and effort needed to complete tasks. This preliminary study therefore suggests a way to naturally integrate human- and machine-based decision making systems.
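To make the approach concrete, the sketch below shows how one such switching prediction could be learned online as a GVF with linear TD(λ): the cumulant is 1 on time steps where the user switches motor functions and 0 otherwise, so the learned value is a discounted forecast of upcoming switching events. This is a minimal illustration under assumed choices, not the study's actual implementation; the class name, the feature representation, and the parameter values (alpha, gamma, lambda) are all hypothetical.

```python
import numpy as np

class SwitchPredictionGVF:
    """Minimal sketch of a GVF learned by linear TD(lambda).

    Predicts a discounted sum of a binary 'switch' cumulant
    (1 when the user toggles control modes, 0 otherwise),
    i.e. an anticipation of upcoming switching events.
    All names and parameter values are illustrative assumptions.
    """

    def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.7):
        self.w = np.zeros(n_features)   # learned weight vector
        self.e = np.zeros(n_features)   # eligibility trace
        self.alpha = alpha              # step size
        self.gamma = gamma              # discount (prediction timescale)
        self.lam = lam                  # trace-decay parameter

    def predict(self, x):
        # Linear value estimate for feature vector x.
        return float(np.dot(self.w, x))

    def update(self, x, cumulant, x_next):
        # TD error: cumulant is 1 on time steps where the user switches.
        delta = cumulant + self.gamma * self.predict(x_next) - self.predict(x)
        # Accumulating eligibility traces.
        self.e = self.gamma * self.lam * self.e + x
        self.w += self.alpha * delta * self.e


if __name__ == "__main__":
    # Toy per-time-step loop; random features stand in for the
    # EMG- and robot-arm-derived state used in the real application.
    rng = np.random.default_rng(0)
    gvf = SwitchPredictionGVF(n_features=16)
    x = rng.random(16)
    for t in range(1000):
        x_next = rng.random(16)
        switched = 1.0 if rng.random() < 0.05 else 0.0  # simulated switch event
        gvf.update(x, switched, x_next)
        x = x_next
    print("current switch forecast:", gvf.predict(x))
```

A forecast of this kind, learned and updated on every time step, is what would let the system gauge its own certainty before taking over switching decisions from the user.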
