Ranking estimation performance by estimator randomization and attribute support

To rank the performance of estimators, we propose weighted-averaging approaches based on estimator randomization and on attribute support, respectively. We assume that the "best" estimator is random and may be any of the considered estimators. Different error metrics provide different observations of this random variable: a better estimator has a larger probability of being the "best" one according to an error metric, so the data of each error metric can be translated into a probability mass function (pmf) of the random variable conditioned on that metric. We combine these pmfs by corresponding weights to obtain a fused pmf that is used to rank the estimators. The weights are determined by an observation support vector (OSV), obtained from an observation support matrix (OSM) composed of the pairwise similarities of the pmfs in terms of the proposed Kullback-Leibler ratio divergence (KLRD). Weighted averaging based on attribute support requires data normalization, and its weights are determined by an attribute support vector (ASV), obtained from an attribute support matrix (ASM) composed of the pairwise cosine similarities of the attributes. The ideas of estimator randomization and attribute support can also be applied to other multiple-attribute ranking problems.
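A minimal sketch of the fusion pipeline described above, under stated assumptions: the error-to-pmf mapping (inverse-error normalization) and the use of cosine similarity in place of the paper's KLRD are illustrative substitutions, not the paper's exact construction.

```python
import numpy as np

def errors_to_pmf(errors):
    """Map one error metric's values (lower is better) over the estimators
    to a pmf of the 'best estimator' random variable. Inverse-error
    normalization is an assumed mapping for illustration only."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    return inv / inv.sum()

def cosine(p, q):
    """Pairwise similarity of two pmfs; stands in for the paper's KLRD."""
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def support_weights(pmfs):
    """Build a support matrix of pairwise pmf similarities, reduce it to a
    support vector (row means), and normalize it to fusion weights."""
    m = len(pmfs)
    S = np.array([[cosine(pmfs[i], pmfs[j]) for j in range(m)]
                  for i in range(m)])
    v = S.mean(axis=1)        # support vector: how much each pmf agrees with the rest
    return v / v.sum()        # normalized fusion weights

def fuse_and_rank(error_table):
    """error_table[k][i] is the error of estimator i under metric k.
    Returns (ranking, fused pmf), best estimator first."""
    pmfs = [errors_to_pmf(row) for row in error_table]
    w = support_weights(pmfs)
    fused = sum(wi * pi for wi, pi in zip(w, pmfs))
    return np.argsort(fused)[::-1], fused

# Three error metrics (rows) evaluating three estimators (columns).
errors = [[0.8, 1.2, 2.0],
          [0.9, 1.0, 2.5],
          [0.7, 1.4, 1.9]]
ranking, fused = fuse_and_rank(errors)
```

Because every metric here favors estimator 0, the fused pmf puts the most mass on it and the ranking lists it first; metrics that disagree with the consensus would receive smaller weights through the support vector.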
