Robust Principal Component Analysis with Side Information
7 Appendix

7.1 Preliminaries

We first revisit some basic properties of the linear operators and projections defined earlier. Recall that $H_0 = U \Sigma V^T$ is the reduced SVD of $H_0$, and the space $T$ is defined as
$$T := \{ U A^T + B V^T \mid A, B \in \mathbb{R}^{d \times r} \},$$
with $P_T$ the orthogonal projection onto $T$. It is known that any subgradient of $\|H_0\|_*$ has the form $UV^T + W$, where $P_T W = 0$ and $\|W\| \le 1$. Similarly, we have defined $\Omega$ to be the set of matrices whose support coincides with that of $S_0$, and $P_\Omega$ is the orthogonal projection onto $\Omega$. It is also known that any subgradient of $\|S_0\|_1$ takes the form $\mathrm{sgn}(S_0) + F$, where $P_\Omega F = 0$ and $\|F\|_\infty \le 1$. Under the incoherence assumptions, we also introduce a norm inequality on rank-1 matrices which we will use frequently in the proof. Given any matrix of the form $x_i y_j^T$
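Since the proofs apply $P_T$ and $P_\Omega$ repeatedly, it may help to recall their closed forms; these are standard identities that follow directly from the definitions of $T$ and $\Omega$ above, included here for reference rather than restated from the excerpt:
$$P_T(M) = UU^T M + M VV^T - UU^T M VV^T, \qquad P_{T^\perp}(M) = (I - UU^T)\, M\, (I - VV^T),$$
$$[P_\Omega(M)]_{ij} = \begin{cases} M_{ij}, & (i,j) \in \mathrm{supp}(S_0), \\ 0, & \text{otherwise}. \end{cases}$$
In particular, both operators are idempotent and self-adjoint, so $\langle P_T(M), N \rangle = \langle M, P_T(N) \rangle$ for all $M, N$, a property used implicitly whenever a subgradient is split across $T$ and $T^\perp$ (or $\Omega$ and $\Omega^\perp$).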