A Minimax Method for Learning Functional Networks

In this paper, a minimax method for learning functional networks is presented. The idea of the method is to minimize the maximum absolute error between predicted and observed values. In addition, the invertible functions appearing in the model are assumed to be linear convex combinations of invertible functions. This guarantees the invertibility of the resulting approximations. The learning method leads to a linear programming problem, so that: (a) the solution is obtained in a finite number of iterations, and (b) the global optimum is attained. The method is illustrated with several application examples, including the Hénon and Lozi series. The results show that the method outperforms standard least-squares direct methods.
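For intuition, a minimax fit of this kind can be cast as a linear program of the following general form; the symbols used here (data pairs $(x_i, y_i)$, basis functions $\phi_j$, and coefficients $c_j$) are illustrative and need not match the paper's exact notation:

\[
\begin{aligned}
\min_{\varepsilon,\;c_1,\dots,c_m} \quad & \varepsilon \\
\text{subject to} \quad & -\varepsilon \;\le\; y_i - \sum_{j=1}^{m} c_j\,\phi_j(x_i) \;\le\; \varepsilon, \qquad i = 1,\dots,n, \\
& c_j \ge 0, \qquad \sum_{j=1}^{m} c_j = 1.
\end{aligned}
\]

Here $\varepsilon$ is the maximum absolute error being minimized, and the constraints $c_j \ge 0$, $\sum_j c_j = 1$ express the assumption that each fitted function is a linear convex combination of invertible basis functions $\phi_j$, which is what guarantees invertibility of the resulting approximation.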