Towards a Digital Twin with Generative Adversarial Network Modelling of Machining Vibration

The transition towards Industry 4.0 relies heavily on manufacturing digitalisation. Among the relevant technologies, the digital twin plays a significant role as a powerful tool that is expected to provide digital access to detailed real-time monitoring of physical processes and to enable significant optimisation through the utilisation of the big data acquired from them. Over the past years, a significant number of works have produced conceptual frameworks for digital twins and discussed their requirements and benefits. The research literature also offers application examples and proofs of concept, although this content remains less abundant. This paper presents a generative model based on generative adversarial networks (GANs) for machining vibration data, discusses its performance and analyses its drawbacks. The proposed model includes process parameter inputs used to condition the features of the generated signals. The control over the generator and a neural network architecture utilising techniques from style-transfer research provide the means to analyse the signal building blocks learned by the model and to explore their relationships. The quality of the learned process representation is demonstrated using a dataset obtained from a machining time-domain simulation. The novel results constitute a critical component of a machining digital twin and open new research directions towards the development of comprehensive manufacturing digital twins.
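
The abstract describes a generator whose outputs are conditioned on process parameters and whose architecture borrows style-transfer techniques (adaptive instance normalisation, as in style-based GANs). The paper's exact architecture and layer sizes are not given here, so the following is only a minimal PyTorch sketch of that idea: latent noise and normalised process parameters (e.g. spindle speed, feed rate, depth of cut) are mapped to a style vector that modulates every 1-D upsampling block of the generator to produce a vibration waveform. All class names, dimensions and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdaIN1d(nn.Module):
    """Adaptive instance normalisation for 1-D feature maps:
    the style vector sets a per-channel scale and bias."""
    def __init__(self, style_dim, channels):
        super().__init__()
        self.norm = nn.InstanceNorm1d(channels, affine=False)
        self.affine = nn.Linear(style_dim, channels * 2)

    def forward(self, x, style):
        scale, bias = self.affine(style).chunk(2, dim=1)
        x = self.norm(x)
        return x * (1 + scale.unsqueeze(-1)) + bias.unsqueeze(-1)


class StyleBlock(nn.Module):
    """Upsampling block: 4x transposed convolution followed by AdaIN conditioning."""
    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.up = nn.ConvTranspose1d(in_ch, out_ch, kernel_size=25,
                                     stride=4, padding=11, output_padding=1)
        self.adain = AdaIN1d(style_dim, out_ch)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, style):
        return self.act(self.adain(self.up(x), style))


class ConditionalStyleGenerator(nn.Module):
    """Sketch of a style-based generator for vibration signals: latent noise
    plus process parameters are mapped to a style vector that conditions
    every upsampling block (hypothetical sizes throughout)."""
    def __init__(self, latent_dim=64, param_dim=3, style_dim=128,
                 base_len=64, base_ch=256, n_blocks=3):
        super().__init__()
        self.mapping = nn.Sequential(
            nn.Linear(latent_dim + param_dim, style_dim), nn.LeakyReLU(0.2),
            nn.Linear(style_dim, style_dim), nn.LeakyReLU(0.2),
        )
        # Learned constant input, as in style-based generators.
        self.const = nn.Parameter(torch.randn(1, base_ch, base_len))
        blocks, ch = [], base_ch
        for _ in range(n_blocks):
            blocks.append(StyleBlock(ch, ch // 2, style_dim))
            ch //= 2
        self.blocks = nn.ModuleList(blocks)
        self.to_signal = nn.Conv1d(ch, 1, kernel_size=1)

    def forward(self, z, params):
        style = self.mapping(torch.cat([z, params], dim=1))
        x = self.const.expand(z.size(0), -1, -1)
        for block in self.blocks:
            x = block(x, style)
        return torch.tanh(self.to_signal(x))  # waveform scaled to [-1, 1]


if __name__ == "__main__":
    g = ConditionalStyleGenerator()
    z = torch.randn(8, 64)       # latent codes
    params = torch.rand(8, 3)    # normalised process parameters (assumed in [0, 1])
    print(g(z, params).shape)    # torch.Size([8, 1, 4096])
```

Routing the process parameters through the style vector, rather than concatenating them to the noise at a single layer, is what lets every block of the generator be conditioned on the machining regime; a critic trained with a Wasserstein gradient-penalty objective is a common pairing for such 1-D signal generators.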
