Predicting Memory Compiler Performance Outputs Using Feed-forward Neural Networks

Typical semiconductor chips include thousands of mostly small memories. As memories contribute an estimated 25% to 40% to the overall power, performance, and area (PPA) of a product, they must be designed carefully to meet the system's requirements. Memory arrays are highly uniform and can be described by approximately ten parameters, with the exact number depending mostly on the complexity of the periphery. Thus, to improve PPA utilization, memories are typically generated by memory compilers. A key task in the design flow of a chip is to find optimal memory compiler parametrizations that fulfill system requirements on the one hand and optimize PPA on the other. Although most compiler vendors also provide optimizers for this task, these are often slow or inaccurate. To enable efficient optimization in spite of long compiler runtimes, we propose training fully connected feed-forward neural networks to predict PPA outputs given a memory compiler parametrization. Using an exhaustive search-based optimizer framework that queries the neural networks for predictions, PPA-optimal parametrizations are found within seconds after chip designers have specified their requirements. Average model prediction errors of less than 3%, a decision reliability of over 99%, and productive usage of the optimizer for successful, large-volume chip design projects illustrate the effectiveness of the approach.
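The sketch below illustrates the general idea, not the authors' implementation: a fully connected feed-forward regressor maps a memory compiler parametrization to predicted PPA outputs, and an exhaustive search over the parameter grid returns the predicted-optimal feasible configuration. Scikit-learn's MLPRegressor stands in for whatever framework was actually used, and all parameter names, ranges, requirement thresholds, and training data are hypothetical placeholders.

```python
# Minimal sketch: feed-forward PPA prediction plus exhaustive search.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in training data for actual compiler runs: each row of X is a
# parametrization, each row of y is (area, access time, leakage).
X = np.column_stack([
    rng.integers(64, 513, size=2000),    # word count (hypothetical range)
    rng.integers(8, 65, size=2000),      # bits per word
    rng.choice([4, 8, 16], size=2000),   # column mux factor
    rng.integers(0, 2, size=2000),       # power-gating option
]).astype(float)
y = rng.uniform(size=(2000, 3))          # placeholder PPA measurements

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(64, 64), solver="adam",
                     early_stopping=True, max_iter=500, random_state=0)
model.fit(scaler.transform(X), y)

# Exhaustive search: enumerate every candidate parametrization, predict its
# PPA, drop candidates violating the (hypothetical) requirements, and keep
# the feasible candidate with the smallest predicted area.
grid = itertools.product(range(64, 513, 64), range(8, 65, 8), (4, 8, 16), (0, 1))
candidates = np.array(list(grid), dtype=float)
area, access_time, leakage = model.predict(scaler.transform(candidates)).T
feasible = (access_time < 0.9) & (leakage < 0.7)
if feasible.any():
    best = candidates[feasible][np.argmin(area[feasible])]
    print("PPA-optimal parametrization:", best)
else:
    print("No parametrization meets the requirements.")
```

In a real deployment, the training targets would come from characterized compiler outputs and the search grid would span the compiler's legal parameter space; because only network inference is needed per candidate, the full enumeration completes within seconds.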
