Applying MDL to Learning Best Model Granularity

The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: learning the best model granularity. The performance of a model depends critically on its granularity, for example the precision chosen for its parameters. Too high a precision generally leads to modeling accidental noise, while too low a precision may conflate models that should be distinguished. In practice this precision is often chosen ad hoc. Under MDL the best model is the one that most compresses a two-part code of the data set; this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined by MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Using a new modification of elastic matching with multiple prototypes per character, the optimal prediction rate is predicted to occur at the value of the learned parameter (the length of the sampling interval) considered most likely by MDL, and this is shown to coincide with the best value found experimentally. In the second experiment the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we must determine the number of hidden nodes that gives the best modeling performance. The optimal model (the one that extrapolates best to unseen examples) is predicted to occur at the number of hidden nodes considered most likely by MDL, which again coincides with the best value found experimentally.
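
To make the two-part code concrete, the following sketch (not the paper's code; every name and the choice of task are illustrative) selects a granularity parameter by minimizing model bits plus data bits. Here the granularity is taken to be the number of histogram bins used to model a one-dimensional sample, and the parameter cost uses the common approximation of (1/2) log2 n bits per free parameter; encoding of the bin edges themselves is ignored for simplicity.

```python
# A minimal two-part MDL sketch (assumed setup, not from the paper):
# pick the granularity (number of bins) that minimizes
#   L(model) + L(data | model)  in bits.
import numpy as np

def two_part_code_length(data, n_bins):
    """Bits to describe the model (bin probabilities) plus bits to
    describe the data given the model (Shannon code lengths)."""
    n = len(data)
    counts, edges = np.histogram(data, bins=n_bins)
    probs = counts / n
    # Model cost: n_bins - 1 free parameters, each encoded to precision
    # about 1/sqrt(n), i.e. roughly 0.5 * log2(n) bits per parameter.
    model_bits = 0.5 * (n_bins - 1) * np.log2(n)
    # Data cost: -log2 of the modeled density at each point (probability
    # of the point's bin, spread uniformly over the bin width).
    widths = np.diff(edges)
    bin_idx = np.clip(np.digitize(data, edges) - 1, 0, n_bins - 1)
    p = probs[bin_idx]
    data_bits = -np.sum(np.log2(p / widths[bin_idx]))
    return model_bits + data_bits

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
candidates = range(2, 60)
best = min(candidates, key=lambda k: two_part_code_length(sample, k))
print("MDL-preferred number of bins:", best)
```

The same pattern, encode the model, then encode the data given the model, and keep the candidate with the shortest total, is what the two experiments apply to the length of the sampling interval and to the number of hidden nodes, respectively.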
