Optimal projection to improve parametric importance sampling in high dimension

In this paper we propose a dimension-reduction strategy to improve the performance of importance sampling in high dimension. The idea is to estimate variance terms in a small number of suitably chosen directions. We first prove that the optimal directions, i.e., the ones that minimize the Kullback–Leibler divergence with the optimal auxiliary density, are the eigenvectors associated with extreme (small or large) eigenvalues of the optimal covariance matrix. We then perform extensive numerical experiments showing that, as the dimension increases, these directions yield estimates that are very close to optimal. Moreover, the estimate remains accurate even when a simple empirical estimator of the covariance matrix is used to compute these directions. These theoretical and numerical results open the way to several generalizations, in particular the incorporation of these ideas into adaptive importance sampling schemes.
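A minimal numerical sketch (not the authors' implementation) of the projection idea described above, written in Python with NumPy: the covariance of the optimal auxiliary density is estimated empirically from samples falling in a toy failure domain, the eigenvectors with the most extreme eigenvalues (farthest from 1) are retained, and a Gaussian auxiliary density that differs from the standard-normal input density only along those directions is used for importance sampling. The limit-state function, the number of retained directions k, the sample sizes, and the small eigenvalue floor are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 50, 2, 20_000      # dimension, retained directions, sample size (assumptions)
beta = 2.0                   # toy failure threshold (assumption)

def limit_state(x):
    # toy limit-state function: failure when the normalized coordinate sum exceeds beta
    return x.sum(axis=1) / np.sqrt(x.shape[1]) - beta

# --- stage 1: crude sampling from the standard Gaussian input density ---
x = rng.standard_normal((n, d))
fail = limit_state(x) >= 0.0
w = fail.astype(float)       # indicator weights of the optimal (zero-variance) density
if w.sum() == 0:
    raise RuntimeError("no failure sample; increase n or lower beta")

# empirical mean and covariance of the optimal auxiliary density
mu_opt = (w[:, None] * x).sum(axis=0) / w.sum()
xc = x - mu_opt
cov_opt = (w[:, None] * xc).T @ xc / w.sum()

# --- keep only the k eigenvectors whose eigenvalues deviate most from 1 ---
eigval, eigvec = np.linalg.eigh(cov_opt)
idx = np.argsort(np.abs(eigval - 1.0))[-k:]
V, lam = eigvec[:, idx], eigval[idx]
lam = np.clip(lam, 1e-2, None)   # small floor for numerical stability (assumption)

# auxiliary covariance: identity everywhere except along the selected directions
cov_aux = np.eye(d) + V @ np.diag(lam - 1.0) @ V.T

# --- stage 2: importance sampling with the projected Gaussian auxiliary density ---
y = rng.multivariate_normal(mu_opt, cov_aux, size=n)
diff = y - mu_opt
cov_inv = np.linalg.inv(cov_aux)
_, logdet = np.linalg.slogdet(cov_aux)
log_f = -0.5 * (y ** 2).sum(axis=1)                    # log input density (up to constants)
log_g = -0.5 * logdet - 0.5 * np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
log_w = log_f - log_g                                  # (2*pi) factors cancel in the ratio
p_hat = np.mean((limit_state(y) >= 0.0) * np.exp(log_w))

print("crude Monte Carlo estimate:", fail.mean())
print("projected IS estimate:     ", p_hat)
```

In this sketch only k directions carry estimated variance terms; every other direction keeps the unit variance of the input density, which is what keeps the parametrization of the auxiliary density low-dimensional even when d is large.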
