A MOREAU-YOSIDA REGULARIZATION OF A DIFFERENCE OF TWO CONVEX FUNCTIONS∗

We present a scheme to minimize a difference of two convex functions by solving a variational problem. The proposed scheme uses a proximal regularization step (see (8)) to construct a translated fixed-point iteration. It can be seen as a descent scheme which takes into account the convexity properties of the two functions separately. A direct application of the proposed algorithm to variational inclusions is given.

1 Preliminaries

In nonconvex programming problems, the fundamental property of convex problems that local solutions are global ones is no longer true. Methods using only local information are therefore insufficient to locate global minima, and optimality conditions for nonconvex optimization problems have to take into account the form and structure of the model. In this work we are interested in a certain class of models called d.c. problems, which deal with the minimization or maximization of a difference of two or more convex functions. It is well known that for two convex functions g and h, the sum g + h is again a convex function, as are the maximum max{g, h} and the multiple λg for any positive λ. The difference g − h, however, is in general no longer convex. This is why d.c. problems are difficult: they are nonconvex problems. In this work we present a regularization approach that exploits the convexity of both functions involved in the d.c. model to locate a minimum; the presented scheme is a type of descent method.

Let E be a finite-dimensional vector space and let ⟨·, ·⟩ denote the inner product. Θ(E) denotes the set of proper, convex and lower semi-continuous functions on E. Let f be a d.c. function on E, that is, there exist two functions g and h, both in Θ(E), such that

f(x) = g(x) − h(x), ∀x ∈ E.
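Although the paper's own regularization step is only introduced later (in (8)), it is worth recalling the classical Moreau-Yosida construction on which such schemes rest: for h ∈ Θ(E) and λ > 0,

h_λ(x) = min_{y∈E} { h(y) + (1/2λ)‖y − x‖² }, ∀x ∈ E,

where the minimum is attained at a unique point, denoted prox_{λh}(x). The regularized function h_λ is convex, finite and differentiable everywhere, with ∇h_λ(x) = (x − prox_{λh}(x))/λ, even when h itself is neither finite nor smooth.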
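To make the idea in the abstract concrete, the following Python sketch runs a proximal-type d.c. iteration on a toy one-dimensional problem. The test function f(x) = x⁴ − x², its splitting g(x) = x⁴ and h(x) = x², the parameter lam and the generic scalar solver for the proximal subproblem are illustrative choices of ours, not the paper's scheme (8); the sketch only shows a translated fixed-point iteration in the same spirit.

```python
from scipy.optimize import minimize_scalar

# Toy d.c. decomposition (our illustrative choice, not from the paper):
# f(x) = x**4 - x**2 with g(x) = x**4 and h(x) = x**2, both convex.
g = lambda y: y**4
grad_h = lambda x: 2.0 * x           # h(x) = x**2, so h'(x) = 2x

def prox_g(z, lam):
    """prox_{lam*g}(z) = argmin_y { g(y) + (y - z)**2 / (2*lam) }.
    The subproblem is strictly convex, so a 1-d solver finds its minimizer."""
    return minimize_scalar(lambda y: g(y) + (y - z) ** 2 / (2.0 * lam)).x

# Translated fixed-point iteration: linearize the concave part -h at x_k,
# then take a proximal (Moreau-Yosida) step on the convex part g:
#     x_{k+1} = prox_{lam*g}(x_k + lam * h'(x_k)).
x, lam = 1.5, 0.5
for _ in range(50):
    x = prox_g(x + lam * grad_h(x), lam)

print(x)  # ≈ 0.7071 = 1/sqrt(2), a critical point of f (f'(x) = 4x³ − 2x = 0)
```

The fixed-point characterization is immediate: x = prox_{λg}(x + λh′(x)) is exactly the optimality condition g′(x) − h′(x) = 0 of the proximal subproblem, so the iteration can only stop at critical points of f = g − h.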