This paper presents a generic machine-learning-based approach to devising performance assessment functions for any kind of optimization problem. The need for a performance assessment process that takes the robustness of solutions into account is stressed, and a general methodology for devising a function that estimates such performance on any given engineering problem is formalized. This methodology is used as the basis for training machine learning models capable of assessing the performance of real-world time series classification algorithms, using ratings from expert engineers as training data. Although the methodology is demonstrated on a time series classification problem, it is generally valid and can easily be applied to devise arbitrary scalar performance functions for complex multi-objective problems as well. The trained machine learning models can be understood as performance assessment functions that, having learned the engineer's “gut instinct”, are able to assess robustness performance far more objectively than a human expert could. They represent key components for enabling automatic, computationally intensive processes such as multi-objective optimization or feature selection.
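As a rough illustration of the idea described above, the sketch below trains a regressor that maps summary statistics of a candidate classifier's behaviour to a scalar expert rating. The feature layout, dataset sizes, and the choice of a random-forest regressor are assumptions for illustration only, not the authors' actual setup.

```python
# Minimal sketch (assumed setup): learning a scalar performance-assessment
# function from expert ratings. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row summarizes one candidate time series classifier evaluated across a
# set of test scenarios (e.g., per-scenario error rates, error variance under
# input perturbations, worst-case deviation). The target is the expert's
# scalar robustness rating for that candidate.
n_candidates, n_features = 200, 8
X = rng.random((n_candidates, n_features))   # placeholder evaluation features
y_expert = rng.random(n_candidates)          # placeholder expert ratings

# A bagged tree ensemble stands in for the learned "gut instinct": a regressor
# that maps raw evaluation results to a single robustness score.
assessor = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(assessor, X, y_expert, cv=5, scoring="r2").mean())

# Once fitted, the model can serve as an objective function for downstream
# multi-objective optimization or feature selection.
assessor.fit(X, y_expert)
robustness_score = assessor.predict(X[:1])
```

In this reading, the learned model replaces repeated manual expert ratings with a cheap, repeatable scalar function that an optimizer can call thousands of times.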