Optimal universal learning and prediction of probabilistic concepts
We consider the following setting of the (supervised) learning problem. A sequence of input data x_1, ..., x_t, ... is given one by one, and the goal is to predict the corresponding outputs y_1, ..., y_t, .... Our proposed solution to the supervised learning problem is Bayesian, and the contribution of this work lies in determining the optimal way to choose the Bayesian "prior" for the supervised learning problem, and in observing the strong sequential, non-anticipating structure of the resulting universal predictor.
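The sequential Bayesian predictor described above can be illustrated with a minimal sketch. The paper's contribution concerns the *optimal* (capacity-achieving) choice of prior; the uniform prior over a small finite class of Bernoulli experts used below is purely an illustrative assumption, as are the function and variable names.

```python
import numpy as np

def bayes_mixture_predict(ys, thetas, prior=None):
    """Sequential, non-anticipating Bayesian mixture prediction.

    thetas : candidate Bernoulli parameters (the "concept" class).
    prior  : prior weights over thetas; uniform if None (an
             illustrative choice, not the paper's optimal prior).
    Returns the sequence of predictive probabilities P(y_t = 1 | past)
    and the final posterior weights.
    """
    thetas = np.asarray(thetas, dtype=float)
    w = (np.full(len(thetas), 1.0 / len(thetas))
         if prior is None else np.asarray(prior, dtype=float))
    preds = []
    for y in ys:
        preds.append(float(w @ thetas))   # mixture prediction before seeing y_t
        like = thetas if y == 1 else 1.0 - thetas
        w = w * like                      # Bayes update on the observed y_t
        w = w / w.sum()                   # posterior becomes the next prior
    return preds, w

# A stream of all-ones: the posterior concentrates on the largest theta.
preds, w = bayes_mixture_predict([1] * 50, [0.1, 0.5, 0.9])
```

The predictor is non-anticipating in exactly the sense of the abstract: each prediction uses only past observations, and the posterior after step t serves as the prior for step t + 1.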