Object Grasping using Convolutional Neural Networks

We describe a learning-based approach that enables a robotic arm to grasp an object or clear clutter placed in front of it. We use AlexNet, pre-trained on ImageNet, as our convolutional neural network (CNN). Camera images are continuously fed to the network, which identifies objects and outputs a grasp angle together with the coordinates of the object to be picked. These coordinates are sent to the robotic arm to execute the grasp, thereby implementing hand-eye coordination. Our experimental results demonstrate that the robotic arm is able to grasp novel objects successfully: we obtain 70% grasp accuracy on previously seen objects and 64% on novel objects.
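The hand-eye coordination step described above requires mapping the pixel coordinates predicted by the CNN into the arm's workspace. A minimal sketch of one common way to do this, assuming a planar workspace and a calibrated affine pixel-to-arm transform (the calibration point values below are purely illustrative, not from the paper; the predicted grasp angle would be passed through to the gripper separately):

```python
import numpy as np

def fit_pixel_to_arm(pixels, arm_points):
    """Least-squares fit of an affine map [x, y]^T = A @ [u, v, 1]^T
    from corresponding pixel/arm calibration point pairs."""
    P = np.hstack([np.asarray(pixels, dtype=float),
                   np.ones((len(pixels), 1))])          # (N, 3) homogeneous pixels
    A, *_ = np.linalg.lstsq(P, np.asarray(arm_points, dtype=float), rcond=None)
    return A.T                                          # (2, 3) affine matrix

def pixel_to_arm(A, u, v):
    """Map a pixel location (u, v) to arm workspace coordinates (x, y)."""
    return A @ np.array([u, v, 1.0])

# Hypothetical calibration: four image corners matched to arm coordinates (metres).
pix = [(0, 0), (640, 0), (0, 480), (640, 480)]
arm = [(0.20, -0.15), (0.20, 0.15), (0.50, -0.15), (0.50, 0.15)]
A = fit_pixel_to_arm(pix, arm)

# A pixel predicted by the CNN (here the image centre) becomes an arm target.
x, y = pixel_to_arm(A, 320, 240)
```

With four non-degenerate calibration pairs the least-squares fit recovers the affine map exactly; in practice more pairs are collected to average out camera and arm positioning noise.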
