Conservative Stochastic Optimization: $\mathcal{O}(T^{-1/2})$ Optimality Gap with Zero Constraint Violation
This paper considers stochastic convex optimization problems in which the objective and constraint functions involve expectations with respect to data indices or environmental variables, in addition to deterministic convex constraints on the domain of the variables. Although the setting is generic and arises in a number of machine learning applications, online and efficient approaches for solving such problems have not been widely studied. Since the underlying data distribution is unknown a priori, a closed-form solution is generally unavailable and classical deterministic optimization paradigms are not applicable. Existing approaches instead make use of stochastic gradients of the objective and constraints that arrive sequentially over iterations. State-of-the-art approaches, such as those based on the saddle point framework, ensure that both the optimality gap and the constraint violation decay as $\mathcal{O}(T^{-\frac{1}{2}})$, where $T$ is the number of stochastic gradients. In this work, we propose a novel conservative stochastic optimization algorithm (CSOA) that achieves zero average constraint violation and an $\mathcal{O}(T^{-\frac{1}{2}})$ optimality gap. The efficacy of the proposed algorithm is demonstrated on the problem of fair classification.
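The abstract does not spell out the update equations, but the saddle-point view it refers to can be illustrated with a short, hedged sketch: a stochastic primal-dual method run on a constraint that is tightened by a small slack, which is the "conservative" idea suggested by the title. The toy objective $\mathbb{E}[\tfrac{1}{2}\|x-\theta\|^2]$, the linear constraint $\mathbb{E}[\theta^\top x]\le c$, the slack $\epsilon = T^{-1/2}$, and the step size $\eta$ below are all illustrative assumptions, not the paper's exact CSOA specification.

```python
import numpy as np

# Hedged sketch of a conservative stochastic primal-dual method:
# the constraint E[g(x, theta)] <= 0 is tightened to E[g(x, theta)] + eps <= 0,
# and primal descent / dual ascent steps use stochastic gradients only.
# Problem instance, slack, and step sizes are illustrative choices.

rng = np.random.default_rng(0)
d, T = 5, 20_000
mu = np.ones(d)                 # E[theta]; unconstrained optimum of the toy objective
c = 2.0                         # constraint level: E[theta]^T x <= c

eps = 1.0 / np.sqrt(T)          # conservative slack (hypothetical O(T^{-1/2}) choice)
eta = 1.0 / np.sqrt(T)          # primal and dual step size (hypothetical choice)

x = np.zeros(d)                 # primal iterate
lam = 0.0                       # dual iterate (Lagrange multiplier)
x_avg = np.zeros(d)             # running average of primal iterates

for t in range(1, T + 1):
    theta = mu + rng.standard_normal(d)      # stochastic sample
    grad_f = x - theta                       # stochastic gradient of 0.5*||x - theta||^2
    g_val = theta @ x - c                    # stochastic constraint value g(x, theta)
    grad_g = theta                           # stochastic constraint gradient

    # Primal step on the stochastic Lagrangian, then dual ascent on the
    # tightened constraint g + eps (projected onto lambda >= 0).
    x = x - eta * (grad_f + lam * grad_g)
    lam = max(0.0, lam + eta * (g_val + eps))

    x_avg += (x - x_avg) / t                 # online average of iterates

print("averaged iterate:", x_avg)
print("constraint value E[theta]^T x_avg - c:", mu @ x_avg - c)
```

In this toy instance, the averaged iterate should settle near the constrained optimum while the tightened constraint keeps the average violation at or below zero; a deterministic domain constraint, as in the paper's setting, would additionally require projecting the primal step onto that set.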