Action Discretization for Robot Arm Teleoperation in Open-Die Forging

Action extraction from teleoperated robots is a crucial step toward full or shared autonomy in tasks where human experience is indispensable. This is especially important for tasks with dynamic goals, where a human operator needs finer control over how the machine behaves in order to provide assistance or complete the task. Open-die forging is a basic metal-forming process that lacks non-destructive measures of product quality, so human experience is imperative. During the process, a robot arm is teleoperated to place the workpiece between the dies of the forge, where it is struck several times until it reaches a specific geometry. In this paper, we apply a white-box computer vision technique to discretize open-die forging robot-arm teleoperation data into actions, as a step toward learning the operator's behavior.
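The abstract does not detail the discretization method itself, but the general idea of turning a continuous teleoperation signal into discrete actions can be illustrated with a minimal, hypothetical sketch: segment a one-dimensional joint-position trace into "moving" intervals by thresholding its finite-difference speed. The function name, threshold, and sampling step below are illustrative assumptions, not the paper's actual technique.

```python
import numpy as np

def discretize_actions(positions, dt=0.1, vel_threshold=0.05):
    """Segment a 1-D joint-position trace into 'moving' action intervals.

    Illustrative sketch (not the paper's method): an action starts when
    the speed rises above vel_threshold and ends when it falls back below.
    Returns a list of (start_index, end_index) sample-index pairs.
    """
    velocity = np.abs(np.diff(positions) / dt)   # finite-difference speed
    moving = velocity > vel_threshold            # boolean activity mask
    segments, start = [], None
    for i, active in enumerate(moving):
        if active and start is None:
            start = i                            # action begins
        elif not active and start is not None:
            segments.append((start, i))          # action ends
            start = None
    if start is not None:                        # close a trailing segment
        segments.append((start, len(moving)))
    return segments
```

A trace that rests, moves steadily, then rests again would yield a single action segment covering the moving samples; richer pipelines would replace the fixed threshold with learned or vision-based change detection.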
