Machine Learning-Based Acceleration of Genetic Algorithms for Parameter Extraction of High-Dimensional MOSFET Compact Models

The need for more accurate simulations has pushed scientists and engineers to design better, more accurate, and more complex MOSFET compact models, a trend supported by the large gains in computational power over the last decades. The number of parameters in these compact models has grown into the hundreds and thousands, far beyond what a human can handle manually. As a result, calibrating a model to reproduce the real characteristics of a device, a task known as parameter extraction, is a complex and time-consuming task. Many automatic techniques have been tried to solve this problem, and the most promising ones are based on genetic algorithms. Genetic algorithms, however, although well suited to such tasks, require a large number of simulations to converge to a good solution. In this paper we propose a methodology that drastically reduces the number of simulations by combining genetic algorithms with surrogate models used as classifiers. The state of the art on combining surrogate models with genetic algorithms focuses exclusively on using surrogate models as substitutes for the expensive simulations. Our novel approach instead adds a classifier layer between the genetic algorithm and the simulations, which filters out a significant number of non-promising parameter sets that do not need to be simulated at all. In this work, differential evolution was used as the genetic algorithm, and after a careful evaluation of several classifier types, the decision tree classifier was selected as the best performing one. The method was tested on two complex real-life problems, the BSIM4 and HiSIM-HV MOSFET compact models, and the results show that up to 70% of the simulations could be eliminated without disturbing the convergence of the algorithm or degrading the accuracy of the solution.
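The idea of gating expensive simulations behind a classifier can be sketched as follows. This is an illustrative toy, not the paper's implementation: the objective function, population sizes, the "better than the median fitness seen so far" labeling rule, and all variable names are assumptions standing in for a real compact-model simulation and the authors' actual training scheme. A decision tree (as selected in the paper) is retrained each generation on all evaluated points, and trial vectors it deems non-promising are discarded without being simulated.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def simulate(x):
    # Stand-in for an expensive compact-model simulation (toy sphere objective).
    return float(np.sum((x - 0.5) ** 2))

DIM, POP, GENS, F, CR = 8, 30, 40, 0.8, 0.9  # illustrative settings
pop = rng.random((POP, DIM))
fit = np.array([simulate(x) for x in pop])
X_hist = [x for x in pop]      # every parameter set ever simulated
y_hist = list(fit)             # and its fitness
n_sims = POP

for gen in range(GENS):
    # Label past evaluations: "promising" = better than the median fitness seen so far
    # (an assumed labeling rule for this sketch).
    labels = np.array(y_hist) < np.median(y_hist)
    clf = DecisionTreeClassifier(max_depth=5).fit(np.array(X_hist), labels)
    for i in range(POP):
        # DE/rand/1/bin mutation and crossover.
        a, b, c = pop[rng.choice(POP, 3, replace=False)]
        trial = np.where(rng.random(DIM) < CR, a + F * (b - c), pop[i])
        # Classifier gate: skip the simulation entirely for non-promising trials.
        if not clf.predict(trial.reshape(1, -1))[0]:
            continue
        f_trial = simulate(trial)
        n_sims += 1
        X_hist.append(trial)
        y_hist.append(f_trial)
        if f_trial < fit[i]:   # greedy DE selection
            pop[i], fit[i] = trial, f_trial

print(f"best fitness {fit.min():.4f} after {n_sims} simulations "
      f"(vs. {POP * (GENS + 1)} without the filter)")
```

The key design point is that the classifier only decides *whether* to simulate; the genetic algorithm's selection still operates exclusively on true simulation results, which is why filtering need not disturb convergence.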