Leveraging Multimodal Haptic Sensory Data for Robust Cutting

Cutting is a common form of manipulation when working with divisible objects such as food, rope, or clay. Cooking in particular relies heavily on cutting to divide food items into desired shapes. However, cutting food is challenging due to the wide range of material properties that food items exhibit. Because of this variability, the same cutting motion cannot be used for all food items. Sensations from contact events, e.g., placing the knife on the food item, also vary with the material properties, and the robot needs to adapt accordingly. In this paper, we propose using vibration and force-torque feedback from the interactions to adapt the slicing motions and to monitor for contact events. The robot learns neural networks to perform each of these tasks and to generalize across different material properties. By adapting and monitoring its skill executions, the robot reliably cuts through more than 20 different types of food items and can even detect whether certain food items are fresh or old.
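To make the contact-monitoring idea concrete, the sketch below shows one simple way vibration feedback can signal a contact event: an impact on the knife injects a broadband, high-frequency transient, so comparing spectral energy in a short vibration window against a quiet baseline separates "in contact" from "free motion." This is a minimal hand-crafted heuristic for illustration only; the paper itself learns neural networks over the vibration and force-torque signals, and all function names, the window length, and the threshold here are hypothetical.

```python
import numpy as np

def band_energy(window, lo_frac=0.5):
    """Energy in the upper half of the vibration spectrum,
    where impact transients concentrate."""
    mags = np.abs(np.fft.rfft(window))
    cut = int(len(mags) * lo_frac)
    return float(np.sum(mags[cut:] ** 2))

def detect_contact(window, baseline, threshold=3.0):
    """Flag a contact event when high-frequency vibration energy
    exceeds a multiple of the quiet-knife baseline."""
    return band_energy(window) > threshold * baseline

# Synthetic demo: quiet sensor noise vs. a decaying impact transient.
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(256)
n = np.arange(256)
impact = quiet + np.exp(-n / 10.0) * np.sin(2.0 * n)  # ringing after knife touch

baseline = band_energy(quiet)
print(detect_contact(quiet, baseline))   # False: energy equals the baseline
print(detect_contact(impact, baseline))  # True: transient adds spectral energy
```

A learned classifier would replace the fixed threshold with features (e.g., Mel-spectral bands of the vibration signal plus force-torque readings) fed to a small network, letting the decision boundary adapt across material properties rather than relying on a single tuned constant.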
