Image acquisition, evaluation and segmentation of thermal cutting edges using a mobile device

In sheet metal production, the quality of a cut determines the conditions for possible post-processing. With roughness as a parameter for assessing the quality of the cut edge, different techniques based on texture analysis and convolutional neural networks have been developed. All available methods, however, require dedicated equipment and work only under fixed lighting conditions. To open up new applications in the context of Industry 4.0, it is necessary to go beyond these intrinsic limitations, such as camera type and lighting conditions, while maintaining the same level of performance. Given the rapid improvement of smartphone hardware in recent years, and the fact that their performance is in some respects now comparable to that of a PC combined with a mid-range mirrorless camera, it is no longer unrealistic to consider an out-of-the-box use of these devices that exploits their capabilities in a new way and in a new context. We therefore present a method that uses a mobile device with a camera to guarantee images of sufficient quality for further processing aimed at determining the quality of the sheet metal edge. After the image of the sheet metal edge has been acquired under real conditions of use, the method uses a trained deep neural network to segment the sheet metal edge in the picture. After segmentation, a no-reference image quality algorithm provides a quality index, in terms of blurriness, for the image region of the cut edge. In this way, the further evaluation of the cut edge can be restricted to image data that satisfies a specified quality, ignoring all parts of the picture with poor image quality.
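The segmentation step could look roughly like the following Python sketch, assuming a DeepLab-style network with a lightweight backbone has already been fine-tuned on annotated cut-edge images. The weights file "cut_edge_deeplab.pth", the two-class setup (background / cut edge) and the use of torchvision are illustrative assumptions, not the exact pipeline of the paper.

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

NUM_CLASSES = 2  # assumed: class 0 = background, class 1 = cut edge

def load_model(weights_path="cut_edge_deeplab.pth"):
    # DeepLabV3 head on a MobileNetV3 backbone; the fine-tuned weights file is hypothetical.
    model = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(
        weights=None, num_classes=NUM_CLASSES)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def segment_cut_edge(model, image_path):
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)            # (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)["out"]                # (1, NUM_CLASSES, H, W)
    labels = logits.argmax(dim=1).squeeze(0)    # per-pixel class labels
    return img, (labels == 1).cpu().numpy()     # boolean mask of the cut-edge pixels
```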

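The no-reference blurriness check could follow the frequency-domain sharpness measure of De and Masilamani, i.e. the fraction of Fourier coefficients whose magnitude exceeds one thousandth of the maximum (higher values indicate a sharper image). The sketch below is a minimal illustration: restricting the measure to the bounding box of the segmented cut edge is an assumption, and the acceptance threshold on the index would have to be calibrated on real images.

```python
import numpy as np

def frequency_sharpness(gray):
    """Sharpness index of a 2-D grayscale patch: fraction of FFT coefficients
    whose magnitude exceeds 1/1000 of the maximum magnitude."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    threshold = spectrum.max() / 1000.0
    return np.count_nonzero(spectrum > threshold) / gray.size

def edge_region_sharpness(gray, edge_mask):
    """Evaluate only the bounding box of the segmented cut-edge region."""
    rows, cols = np.nonzero(edge_mask)
    patch = gray[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    return frequency_sharpness(patch)

# Usage sketch: accept the image for roughness evaluation only if the index of
# the cut-edge region is high enough (the numeric threshold is an assumption).
# img, mask = segment_cut_edge(model, "edge_photo.jpg")
# gray = np.asarray(img.convert("L"), dtype=float)
# if edge_region_sharpness(gray, mask) > 0.02:
#     ...  # pass the region on to the roughness evaluation
```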