Distributed nuclear norm minimization for matrix completion

The ability to recover a low-rank matrix from a subset of its entries is the leitmotif of recent advances in localizing wireless sensors, unveiling traffic anomalies in backbone networks, and modeling preferences for recommender systems. This paper develops a distributed algorithm for low-rank matrix completion over networks. While nuclear-norm minimization has well-documented merits when centralized processing is viable, the singular-value sum is non-separable, which challenges its minimization in a distributed fashion. To overcome this limitation, an alternative characterization of the nuclear norm is adopted which leads to a separable, yet non-convex, cost that is minimized via the alternating-direction method of multipliers. The novel distributed iterations entail reduced-complexity per-node tasks and affordable message passing among single-hop neighbors. Interestingly, upon convergence the distributed (non-convex) estimator provably attains the global optimum of its centralized counterpart, regardless of initialization. Simulations corroborate the convergence of the novel distributed matrix-completion algorithm and its centralized performance guarantees.
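For concreteness, the separable surrogate alluded to above is presumably based on the well-known bilinear characterization of the nuclear norm; the symbols below ($\mathbf{L}\in\mathbb{R}^{m\times\rho}$, $\mathbf{R}\in\mathbb{R}^{n\times\rho}$, the sampling operator $\mathcal{P}_{\Omega}$, the data matrix $\mathbf{M}$, and the weight $\lambda$) are illustrative and not taken from the paper itself. For any $\mathbf{X}$ with $\mathrm{rank}(\mathbf{X})\le\rho$,
\[
  \|\mathbf{X}\|_{*}
  \;=\;
  \min_{\{\mathbf{L},\mathbf{R}\,:\,\mathbf{X}=\mathbf{L}\mathbf{R}^{\top}\}}
  \tfrac{1}{2}\bigl(\|\mathbf{L}\|_F^{2}+\|\mathbf{R}\|_F^{2}\bigr),
\]
so a standard (hedged) sketch of the resulting matrix-completion cost is
\[
  \min_{\mathbf{L},\mathbf{R}}\;
  \tfrac{1}{2}\bigl\|\mathcal{P}_{\Omega}\bigl(\mathbf{M}-\mathbf{L}\mathbf{R}^{\top}\bigr)\bigr\|_F^{2}
  \;+\;
  \tfrac{\lambda}{2}\bigl(\|\mathbf{L}\|_F^{2}+\|\mathbf{R}\|_F^{2}\bigr),
\]
where $\mathcal{P}_{\Omega}$ zeroes out unobserved entries. Unlike the singular-value sum, the Frobenius-norm regularizer decomposes across rows of $\mathbf{L}$ and $\mathbf{R}$, which is what makes per-node subproblems and ADMM-based message passing with single-hop neighbors possible, at the price of non-convexity in $(\mathbf{L},\mathbf{R})$.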