On representations of the feasible set in convex optimization

We consider the convex optimization problem $$\min_{\mathbf{x}} \{f(\mathbf{x}) : g_j(\mathbf{x}) \leq 0,\ j=1,\ldots,m\},$$ where $f$ is convex, the feasible set $\mathbf{K}$ is convex, and Slater's condition holds, but the functions $g_j$ are not necessarily convex. We show that for any representation of $\mathbf{K}$ that satisfies a mild nondegeneracy assumption, every minimizer is a Karush-Kuhn-Tucker (KKT) point and, conversely, every KKT point is a minimizer. That is, the KKT optimality conditions are necessary and sufficient, just as in convex programming, where one assumes that the $g_j$ are convex. Hence in convex optimization, as far as KKT points are concerned, what really matters is the geometry of $\mathbf{K}$ and not so much its representation.
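As a concrete sanity check (a hypothetical illustration, not taken from the paper), the snippet below verifies the KKT conditions, namely stationarity $\nabla f(\mathbf{x}^*) + \sum_{j} \lambda_j \nabla g_j(\mathbf{x}^*) = 0$ with multipliers $\lambda_j \geq 0$, complementarity $\lambda_j\, g_j(\mathbf{x}^*) = 0$, and feasibility $g_j(\mathbf{x}^*) \leq 0$, at the minimizer of a linear objective over the unit disk, where the disk is described by a *nonconvex* defining function whose gradient does not vanish on the boundary:

```python
import math

# Hypothetical example: the unit disk K = {x : x1^2 + x2^2 <= 1} admits the
# nonconvex representation
#     g(x) = 1 - exp(1 - x1^2 - x2^2),
# since g(x) <= 0  <=>  x1^2 + x2^2 <= 1.  Its gradient
#     grad g(x) = 2 exp(1 - ||x||^2) * x
# is nonzero everywhere on the boundary ||x|| = 1, so this representation is
# nondegenerate in the sense of the abstract.

def grad_f(x):
    # f(x) = x1, a linear (hence convex) objective
    return (1.0, 0.0)

def g(x):
    return 1.0 - math.exp(1.0 - x[0] ** 2 - x[1] ** 2)

def grad_g(x):
    s = 2.0 * math.exp(1.0 - x[0] ** 2 - x[1] ** 2)
    return (s * x[0], s * x[1])

# The minimizer of x1 over the unit disk is x* = (-1, 0).
x_star = (-1.0, 0.0)

# Solve the first stationarity equation grad f + lam * grad g = 0 for lam.
lam = -grad_f(x_star)[0] / grad_g(x_star)[0]

# Stationarity residual grad f(x*) + lam * grad g(x*); should be (0, 0).
residual = tuple(df + lam * dg
                 for df, dg in zip(grad_f(x_star), grad_g(x_star)))

print(lam)             # 0.5 -- nonnegative, as KKT requires
print(residual)        # (0.0, 0.0) -- stationarity holds
print(g(x_star))       # 0.0 -- active constraint, so lam * g(x*) = 0
```

Despite $g$ being nonconvex, the minimizer is a KKT point with a nonnegative multiplier, consistent with the claim that the geometry of $\mathbf{K}$, not the convexity of its defining functions, is what drives the KKT characterization.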