Layer Regeneration Network With Parameter Transfer and Knowledge Distillation for Intelligent Fault Diagnosis of Bearings Using Class-Imbalanced Samples

In recent years, an increasing number of researchers have applied deep learning to monitor and diagnose faults in mechanical equipment. When new task data that were not considered in the training stage arise during equipment operation, the model struggles to recognize them. Training on only the new task data degrades performance on the old tasks, whereas training on all accumulated data increases storage costs and greatly slows model updates as task data accumulate. Therefore, an intelligent fault diagnosis method based on a layer regeneration network under class-imbalanced samples is proposed, which updates the model using only new task data. The method assumes that each sample carries some information about other classes, which is masked by the information of its own class; knowledge distillation is used to extract this inter-class knowledge and enhance the learning of the other classes. First, a cross-domain learning method based on parameter transfer is adopted so that the layer regeneration network model (LRNM) converges quickly on the new task. Then, the implicit knowledge related to the old tasks contained in the new task data is extracted by distillation learning to adjust the global parameters, which alleviates catastrophic forgetting during model updating and enables continual learning. Experiments show that the use of dark knowledge effectively enhances the learning of the other classes.
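The distillation step described above can be illustrated with a standard temperature-scaled soft-target loss. The sketch below is a generic PyTorch example, not the paper's exact LRNM objective; the function name `distillation_loss` and the values of the temperature `T` and weighting factor `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hard-label cross-entropy plus a temperature-scaled soft-target KL term.

    The soft-target term transfers "dark knowledge" (the teacher's
    inter-class similarity structure) to the updated model.
    """
    # Hard-label term: standard cross-entropy on the new-task labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft-target term: KL divergence between temperature-softened
    # distributions; the T*T factor keeps gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft


# Minimal demo: random logits stand in for the updated model (student) and the
# frozen old-task model (teacher); only new-task samples and labels are used.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

In a continual-learning setup of this kind, the frozen copy of the model trained on old tasks would typically serve as the teacher while the parameter-transferred LRNM is trained on new-task samples only, so that the soft targets preserve old-task knowledge without storing old data.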