Improving the Accuracy of Prediction Applications by Efficient Tuning of Gradient Descent Using Genetic Algorithms

Gradient Descent is an algorithm widely used in Machine Learning methods, such as Recommender Systems based on Collaborative Filtering. It searches for the parameter values that minimize a particular cost function. In our research, we consider Matrix Factorization as an application of Gradient Descent, where the optimal values of two matrices must be computed in order to minimize the Root Mean Squared Error criterion on a given training dataset. However, Gradient Descent depends on two important parameters, both constant real numbers, whose values are set without any strict rule and noticeably influence the accuracy of the algorithm: the learning rate and the regularization factor. In this work we apply an evolutionary metaheuristic to find the optimal values of these two parameters. As experimental framework we consider the Student Performance Prediction problem, a problem tackled as a Recommender System with training and test datasets extracted from real cases. After performing a direct search for the optimal values, we apply a Genetic Algorithm that achieves better Gradient Descent accuracy with less computational effort.
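The setting described above can be sketched as follows: a minimal, illustrative Matrix Factorization trained by stochastic gradient descent, where `lr` (learning rate) and `reg` (regularization factor) are the two hyper-parameters the paper proposes to tune. The function name, default values, and toy rating matrix are illustrative assumptions, not the paper's actual implementation or dataset.

```python
import numpy as np

def mf_sgd(R, k=2, lr=0.02, reg=0.02, epochs=1000, seed=0):
    """Factorize R ~ P @ Q.T by gradient descent on observed entries.

    R      : ratings matrix with np.nan marking unobserved entries
    k      : number of latent factors
    lr     : learning rate      (hyper-parameter to be tuned)
    reg    : regularization factor (hyper-parameter to be tuned)
    Returns the RMSE over the observed entries after training.
    """
    rng = np.random.default_rng(seed)
    users, items = np.nonzero(~np.isnan(R))
    P = rng.normal(scale=0.1, size=(R.shape[0], k))
    Q = rng.normal(scale=0.1, size=(R.shape[1], k))
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            # Gradient step with L2 regularization on both factor rows
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    preds = P @ Q.T
    # nanmean skips unobserved (NaN) entries of R
    return float(np.sqrt(np.nanmean((R - preds) ** 2)))

# Toy rating matrix (NaN = missing); a Genetic Algorithm would evolve
# (lr, reg) pairs, using the returned RMSE as the fitness to minimize.
R = np.array([[5, 3, np.nan, 1],
              [4, np.nan, np.nan, 1],
              [1, 1, np.nan, 5],
              [1, np.nan, 5, 4],
              [np.nan, 1, 5, 4]], dtype=float)
rmse = mf_sgd(R)
```

A Genetic Algorithm wraps this routine as its fitness function: each chromosome encodes an `(lr, reg)` pair, and selection favors pairs yielding lower RMSE on held-out data.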