Abstract
The learning procedure of a learning system results in the gradual acquisition of information about the environment. This paper discusses how to measure this information and how the measure fits into the established theory of learning. For this purpose the author introduces the concept of a "learning automaton," which permits a systematic description of most learning machines. It is shown that an expression for the amount of information learned is determined uniquely, up to a constant multiple, by four natural requirements. This expression turns out to be equivalent to the mutual information of information theory. Taking two simple parametric learning machines as examples, we examine the effectiveness of the concept of the amount of learned information in relation to certain types of convergence and to the learning rate.
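The abstract identifies the learned-information measure with the mutual information of information theory but does not reproduce the expression itself. As a sketch, assuming the standard definition for a discrete environment state $X$ and automaton state $Y$ (the pairing of variables here is an assumption, not taken from the paper), the measure would read

$$ I(X;Y) \;=\; \sum_{x}\sum_{y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} $$

where $p(x,y)$ is the joint distribution of environment and automaton states. On this reading, "determined uniquely up to a constant multiple" corresponds to the usual freedom in the choice of the base of the logarithm.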