Robot learning schemes that trade motion accuracy for command simplification

This study was inspired by the ability of the human motor control system to accommodate a wide variety of motions. By contrast, biologically inspired robot learning controllers usually face an enormous learning space in many practical applications. One hypothesis for the superiority of the human motor control system is that it simplifies the motion command at the expense of motion accuracy. This tradeoff offers insight into how fast, simple control can be achieved when a robot task does not demand high accuracy. Two motion command simplification schemes are proposed in this paper, based on the equilibrium-point hypothesis of human motion control. The tradeoff between motion accuracy and command simplification was investigated using robot manipulators to generate signatures. Signature generation involves fast handwriting, a human skill acquired through practice. Because humans learn to sign their names only after they learn to write, and signatures are simplified forms of ordinary handwriting, this second learning process evidently trades motion accuracy for motion speed and command simplicity. Experiments are reported that demonstrate the effectiveness of the proposed schemes.
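Under the equilibrium-point hypothesis, a motion command is a time-varying equilibrium posture toward which spring-like muscle dynamics pull the limb, so simplifying the command amounts to specifying fewer equilibrium points. The sketch below is not the paper's scheme but a minimal single-joint illustration under assumed inertia, stiffness, and damping values; it compares a densely specified virtual trajectory with a one-step equilibrium command and reports how far the simplified command's motion deviates, i.e., the accuracy given up for command simplicity.

```python
import numpy as np

# Minimal single-joint equilibrium-point sketch (illustrative parameters only):
# a spring-damper "muscle" pulls the joint toward the commanded equilibrium angle.
# A simplified command specifies a single final equilibrium point instead of a
# densely sampled virtual trajectory, trading tracking accuracy for simplicity.

I, K, B = 0.05, 4.0, 0.6        # inertia [kg m^2], stiffness, damping (assumed)
dt, T = 0.001, 1.0              # integration step [s], movement duration [s]
t = np.arange(0.0, T, dt)

def simulate(eq_command):
    """Integrate joint dynamics driven toward the commanded equilibrium angle."""
    theta, omega = 0.0, 0.0
    trace = np.empty_like(t)
    for i, ti in enumerate(t):
        torque = K * (eq_command(ti) - theta) - B * omega
        omega += (torque / I) * dt
        theta += omega * dt
        trace[i] = theta
    return trace

# Dense command: smooth virtual trajectory rising from 0 to 1 rad over 0.6 s.
dense_cmd = lambda ti: 0.5 * (1.0 - np.cos(np.pi * min(ti / (0.6 * T), 1.0)))
# Simplified command: a single equilibrium shift straight to the 1 rad target.
simple_cmd = lambda ti: 1.0

deviation = np.abs(simulate(dense_cmd) - simulate(simple_cmd)).max()
print(f"max deviation of simplified-command motion: {deviation:.3f} rad")
```

Both commands settle at the same final posture; the printed deviation quantifies the transient accuracy sacrificed by the simpler command, which is the kind of tradeoff the paper exploits when a task such as signing does not demand high accuracy.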
