We investigate theoretical properties of kernelized control functionals (CFs), a recent variance-reduction technique, regarding their stability when applied to subsets of input distributions or to biased generating distributions. The technique can be viewed as a highly efficient control variate obtained by carefully choosing a function of the input variates, where the function lies in a reproducing kernel Hilbert space with known mean, thus ensuring unbiasedness. In large-scale simulation analysis, one often faces many input distributions, some of which are amenable to CFs while others are not owing to technical difficulties. We show that CFs retain good theoretical properties and deliver variance reduction in these situations. We also show that, even if the input variates are generated with bias, CFs can correct for the bias, though at a price in estimation efficiency. We compare these properties with those of importance sampling, in particular a version that uses a similar kernelized approach.
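To illustrate the control variate idea underlying CFs, the following minimal sketch estimates E[f(X)] with a simple known-mean control function. All specifics here are illustrative assumptions, not the paper's method: CFs would instead learn the control function in an RKHS via a Stein-type construction that guarantees its mean is known; here we simply take g(x) = x with E[g(X)] = 0 under a standard normal input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)

# Target: E[f(X)] for f(x) = exp(x), X ~ N(0,1); true value is e^{1/2}.
f = np.exp(x)

# Control function g(x) = x with known mean E[g(X)] = 0.
# (CFs generalize this step: g is chosen from an RKHS whose
#  construction makes the mean of g known, preserving unbiasedness.)
g = x

# Optimal coefficient beta = Cov(f, g) / Var(g), estimated from the sample.
beta = np.cov(f, g)[0, 1] / np.var(g)

plain = f.mean()                    # plain Monte Carlo estimate
cv = np.mean(f - beta * (g - 0.0))  # control variate estimate

# Both estimators target E[f(X)]; subtracting the zero-mean term
# beta * (g - E[g]) leaves the mean unchanged but shrinks the variance.
```

For this toy choice of f and g, the per-sample variance drops from Var(f) ≈ 4.67 to roughly 1.95, a reduction of about 58%; a well-chosen RKHS control function can do far better.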