Fast Transient Simulation of High-Speed Channels Using Recurrent Neural Network

Generating eye diagrams with a circuit simulator can be very computationally intensive, especially in the presence of nonlinearities. A SPICE-like circuit simulator handling a nonlinear system in the transient regime often requires multiple Newton-like iterations at every time step. In this paper, we leverage machine learning methods, specifically the recurrent neural network (RNN), to generate black-box macromodels and achieve a significant reduction in computation time. In the proposed approach, an RNN model is first trained and validated on a relatively short sequence generated by a circuit simulator. Once training completes, the RNN predicts the remaining sequence, from which an eye diagram is generated; the training cost is thus amortized over the prediction phase. In addition, the proposed approach requires neither complex circuit simulations nor substantial domain knowledge. We use two high-speed link examples to demonstrate that the proposed approach provides adequate accuracy while dramatically reducing computation time. In the high-speed link example with a PAM4 driver, the eye diagram generated by RNN models shows good agreement with that obtained from a commercial circuit simulator. This paper also investigates the impacts of various RNN topologies, training schemes, and tunable parameters on both the accuracy and the generalization capability of an RNN model. It is found that the long short-term memory (LSTM) network outperforms the vanilla RNN in the accuracy of predicted transient waveforms.
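The workflow above can be sketched in a few dozen lines. The following is a minimal illustration, not the paper's model: it uses a small vanilla RNN trained with backpropagation through time in numpy, and a toy one-pole low-pass filter of random PAM4-style symbols stands in for the simulator-generated channel waveform (in the actual approach, the training data would come from a SPICE-like transient simulation, and an LSTM would typically be used instead). The model is fitted on a short leading segment and then replaces the simulator on the rest of the input sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for simulator data (assumption: real waveforms would come
# from a SPICE-like transient simulation of the channel).
# Input: random PAM4-style symbol levels; output: a one-pole low-pass
# response loosely mimicking channel loss.
T = 400
u = rng.choice([-3.0, -1.0, 1.0, 3.0], size=T)
y_true = np.zeros(T)
for t in range(1, T):
    y_true[t] = 0.7 * y_true[t - 1] + 0.3 * u[t]

H = 8                         # hidden state size
Wx = rng.normal(0, 0.1, (H, 1))
Wh = rng.normal(0, 0.1, (H, H))
Wo = rng.normal(0, 0.1, (1, H))
b = np.zeros((H, 1)); c = np.zeros((1, 1))
lr = 0.01
T_train = 200                 # the "short sequence" used for training

def forward(u_seq):
    """Run the RNN over an input sequence; return hidden states and outputs."""
    h = np.zeros((H, 1))
    hs, ys = [h], []
    for ut in u_seq:
        h = np.tanh(Wx * ut + Wh @ h + b)
        hs.append(h)
        ys.append((Wo @ h + c).item())
    return hs, np.array(ys)

losses = []
for epoch in range(300):
    hs, y_pred = forward(u[:T_train])
    err = y_pred - y_true[:T_train]
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through time over the training segment.
    dWx = np.zeros_like(Wx); dWh = np.zeros_like(Wh)
    dWo = np.zeros_like(Wo); db = np.zeros_like(b); dc = np.zeros_like(c)
    dh_next = np.zeros((H, 1))
    for t in reversed(range(T_train)):
        dy = 2.0 * err[t] / T_train          # d(MSE)/d(y_pred[t])
        dWo += dy * hs[t + 1].T
        dc += dy
        dh = Wo.T * dy + dh_next             # gradient into h[t]
        da = dh * (1.0 - hs[t + 1] ** 2)     # through tanh
        dWx += da * u[t]
        db += da
        dWh += da @ hs[t].T
        dh_next = Wh.T @ da                  # carried to t-1
    for p, g in ((Wx, dWx), (Wh, dWh), (Wo, dWo), (b, db), (c, dc)):
        p -= lr * g

# Once trained, the cheap RNN replaces the simulator on the remaining
# input symbols -- the segment that would feed eye-diagram generation.
_, y_rest = forward(u[T_train:])
print(f"train MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the driver symbols drive the model as inputs at every step, inference here is a plain forward pass over new inputs rather than autoregressive free-running, which is what makes the per-step cost so much lower than Newton iterations in a transient solver.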
