Using Style-Transfer to Understand Material Classification for Robotic Sorting of Recycled Beverage Containers

Robotic sorting machines are increasingly being investigated for use in recycling centers. We consider the problem of automatically classifying images of recycled beverage containers by material type, i.e., glass, plastic, metal, or liquid packaging board, when the containers are not in their original condition: their shape and size may be deformed, and their coloring and packaging labels may be damaged or dirty. We describe a retrofitted computer vision system and deep convolutional neural network classifier designed for this purpose, which enabled a sorting machine to reach commercially viable benchmarks for accuracy and speed. We investigate which cues matter most for highly accurate container material recognition: shape, size, color, texture, or all of these. To help answer this question, we used style-transfer methods from the field of deep learning. We found that removing either texture or shape cues significantly reduced container material classification accuracy, while removing color had only a minor negative effect. Unlike recent work on generic objects in ImageNet, networks trained to classify by container material type learned better from object shape than from texture. Our findings show that commercial sorting of recycled beverage containers by material type at high accuracy is feasible, even when the containers are in poor condition. Furthermore, we reinforce the recent finding that convolutional neural networks can learn predominantly from either texture or shape cues.
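To make the cue-ablation idea concrete, the sketch below shows one plausible realization of two of the manipulations the abstract describes, assuming PyTorch and torchvision: color removal via a grayscale transform, and texture representation via Gram matrices of feature maps, as in Gatys-style neural style transfer. This is a minimal illustration, not the authors' released code; the names `material_classifier` and `MATERIAL_CLASSES`, and the choice of ResNet-18 as the backbone, are assumptions for the example.

```python
# Minimal sketch of cue ablation for material classification (illustrative,
# not the paper's actual pipeline).
import torch
import torch.nn as nn
from torchvision import models, transforms

# Hypothetical label set matching the four material types in the abstract.
MATERIAL_CLASSES = ["glass", "plastic", "metal", "liquid_packaging_board"]

# Color ablation: convert to grayscale but replicate to 3 channels so a
# pretrained 3-channel network can still consume the image.
color_ablation = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
])

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (batch, channels, H, W) feature map.

    This second-order feature statistic is the texture representation used
    by Gatys-style style transfer; matching it while discarding spatial
    layout is one way to isolate texture from shape cues.
    """
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def material_classifier() -> nn.Module:
    """Hypothetical 4-way material classifier: an ImageNet-pretrained
    ResNet with its final layer replaced, in the spirit of the deep CNN
    classifier the abstract names."""
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, len(MATERIAL_CLASSES))
    return net
```

Under this setup, one would compare validation accuracy of the same classifier trained and evaluated on original images, grayscale-ablated images, and style-transferred images in which texture or shape cues have been suppressed, attributing the accuracy drop to the removed cue.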
