A Double Gradient Algorithm to Optimize Regularization

In this article we present a new technique for optimizing the regularization parameter of a cost function. On the one hand, the derivatives of the cost function with respect to the weights are used to optimize the network; on the other hand, the derivatives of the cost function with respect to the regularization parameter are used to optimize the smoothness of the function realized by the network. We show that by alternating between these two gradient descent optimizations we can regulate the smoothness of a neural network. We present results of this algorithm on a task designed to clearly expose the network's level of smoothness.
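To make the alternating scheme concrete, the following is a minimal sketch of a "double gradient" loop on a toy ridge-regression problem. It is not the authors' algorithm: the abstract does not specify which criterion drives the regularization parameter, so here the parameter is updated using the gradient of a held-out mean squared error taken through one unrolled weight update (an assumption), and all variable names, step sizes, and the data split are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy linear target, split into training and validation sets.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.5 * rng.normal(size=200)
X_tr, y_tr, X_val, y_val = X[:100], y[:100], X[100:], y[100:]

def data_grad(X, y, w):
    """Gradient of the mean squared error with respect to the weights."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

w = np.zeros(10)
lam = 0.1                     # regularization parameter (assumed initial value)
eta_w, eta_lam = 0.01, 0.001  # assumed step sizes for the two descents

for step in range(2000):
    # Phase 1: gradient step on the weights for the regularized training cost
    #          E(w, lam) = MSE_train(w) + lam * ||w||^2.
    g_w = data_grad(X_tr, y_tr, w) + 2.0 * lam * w
    w = w - eta_w * g_w

    # Phase 2: gradient step on the regularization parameter. The dependence
    # of the held-out cost on lam is approximated through one unrolled weight
    # update: w_plus = w - eta_w * (grad_data + 2*lam*w), so
    # d(w_plus)/d(lam) = -2 * eta_w * w, and by the chain rule
    # d(MSE_val)/d(lam) = grad_val(w_plus) . d(w_plus)/d(lam).
    w_plus = w - eta_w * (data_grad(X_tr, y_tr, w) + 2.0 * lam * w)
    d_val_d_lam = data_grad(X_val, y_val, w_plus) @ (-2.0 * eta_w * w)
    lam = max(lam - eta_lam * d_val_d_lam, 0.0)  # keep the parameter non-negative
```

The point of the sketch is only the structure of the loop: each iteration takes one gradient step on the weights with the regularization parameter held fixed, then one gradient step on the regularization parameter with the weights held fixed.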