Evaluating Performance of Regression Machine Learning Models Using Multiple Error Metrics in Azure Machine Learning Studio

Data-driven companies effectively use regression machine learning methods to make predictions in many sectors. The cloud-based Azure Machine Learning Studio (MLS) has the potential to expedite machine learning experiments by offering a convenient and powerful integrated development environment. However, the process of evaluating machine learning models in Azure MLS has certain limitations, e.g., a small number of performance metrics and a lack of functionality for evaluating custom-built regression models written in the R language. This paper reports the results of an effort to build an Enhanced Evaluate Model (EEM) module that facilitates and accelerates the development and evaluation of Azure experiments. The EEM combines multiple performance metrics, allowing for a multi-sided evaluation of regression models, and offers four times more metrics than the built-in Azure Evaluate Model module. The EEM metrics include CoD, GMRAE, MAE, MAPE, MASE, MdAE, MdAPE, MdRAE, ME, MPE, MRAE, MSE, NRMSE_mm, NRMSE_sd, RAE, RMdSPE, RMSE, RMSPE, RSE, sMAPE, SMdAPE, and SSE. In addition, the EEM supports the evaluation of R language based regression models. The operational Enhanced Evaluate Model module has been published to the web and is openly available for experiments and extensions.
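For illustration, the sketch below gives standard textbook definitions of four of the listed metrics (MAE, RMSE, MAPE, and one common variant of sMAPE) in R, the language the EEM targets. This is a minimal sketch of the kind of computation the module performs, not the EEM source code itself; the actual and predicted vectors are hypothetical example data.

# Standard definitions of four regression error metrics
# (illustrative only, not the EEM implementation).
mae   <- function(actual, predicted) mean(abs(actual - predicted))
rmse  <- function(actual, predicted) sqrt(mean((actual - predicted)^2))
mape  <- function(actual, predicted) 100 * mean(abs((actual - predicted) / actual))
smape <- function(actual, predicted)   # one common sMAPE variant
  200 * mean(abs(actual - predicted) / (abs(actual) + abs(predicted)))

# Hypothetical example data.
actual    <- c(3.1, 2.4, 5.8, 4.2)
predicted <- c(2.9, 2.6, 6.1, 4.0)
mae(actual, predicted)    # mean absolute error: 0.225
rmse(actual, predicted)   # root mean squared error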