We revisit and slightly modify the proof of the Gaussian Hanson-Wright inequality, keeping track of the absolute constant in its formulation. In this short report we investigate the following concentration-of-measure inequality, which is a special case¹ of the Hanson-Wright inequality [1], [2]:

Theorem 1 (Gaussian Hanson-Wright Inequality). Let x ∼ N(0, Iₙ) be a standard Gaussian vector of length n. If A is a nonzero n × n matrix, then

    Pr( |x⊤Ax − E[x⊤Ax]| ≥ a ) ≤ 2 exp( −κ min{ a²/‖A‖₂² , a/‖A‖ } ),    (1)

for every a > 0, where κ is an absolute constant that does not depend on n, A, or a. Here, ‖A‖₂ is the Hilbert-Schmidt norm of A, defined by ‖A‖₂ := (tr(A⊤A))^(1/2), and ‖A‖ is the operator norm of A, given by

    ‖A‖ := max{ ‖Ax‖₂ : ‖x‖₂ ≤ 1 } = (λmax(A⊤A))^(1/2),

where λmax(M) denotes the largest eigenvalue of a matrix M.

The largest admissible value of κ in (1) is unknown; even a lower bound on it is not reported in the literature. The following proposition presents a value for κ in the special case where the matrix A in (1) is a real symmetric matrix.

¹ In the general formulation of the Hanson-Wright inequality, the entries of x are i.i.d. random variables with zero mean, unit variance, and sub-Gaussian tail decay.
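To see the bound in action, the following Python sketch is a minimal numerical check, assuming an illustrative symmetric test matrix A, dimension n = 50, and a trial constant κ = 1/8 (none of these values are established in the report): it samples x ∼ N(0, Iₙ), computes the centered quadratic form x⊤Ax − tr(A), and compares its empirical tail with the right-hand side of (1).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
# Illustrative symmetric test matrix; Theorem 1 allows any nonzero n x n matrix,
# and symmetry matches the special case discussed at the end of the report.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

hs_norm = np.linalg.norm(A, "fro")   # Hilbert-Schmidt norm ||A||_2
op_norm = np.linalg.norm(A, 2)       # operator norm ||A||
mean_qf = np.trace(A)                # E[x^T A x] = tr(A) for x ~ N(0, I_n)

# Empirical tail of the centered quadratic form over many samples.
num_samples = 200_000
x = rng.standard_normal((num_samples, n))
qf = np.einsum("ij,jk,ik->i", x, A, x)   # x^T A x for each row of x
dev = np.abs(qf - mean_qf)

kappa = 1 / 8   # trial constant, purely illustrative; not a proven value
for a in (20.0, 50.0, 100.0, 200.0):
    empirical = (dev >= a).mean()
    bound = 2 * np.exp(-kappa * min(a**2 / hs_norm**2, a / op_norm))
    print(f"a = {a:6.1f}: empirical tail = {empirical:.4f}, HW bound = {bound:.4f}")
```

Both branches of the minimum in (1) matter here: for small deviations a the sub-Gaussian term a²/‖A‖₂² dominates, while for large a the sub-exponential term a/‖A‖ takes over.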
[1] R. M. Dudley, Real Analysis and Probability, 2002.
[2] M. Rudelson and R. Vershynin, Hanson-Wright inequality and sub-Gaussian concentration, 2013.
[3] D. L. Hanson and F. T. Wright, A bound on tail probabilities for quadratic forms in independent random variables, 1971.