During the last two decades or so, information theory has been used as a tool to describe the probabilistic components of notated music. It has served, albeit only on occasion, both for the analysis and, somewhat secondarily, for the synthesis of music. In most cases, the information content in the pitch and/or interval structures of melodic lines has been assessed [1]; entropies have been derived for the "alphabets" (pitches, pitch sequences, pitch intervals, and so forth) from which individual melodies or collections of melodies were assembled. The need to apply information theory to symbolic representations of music, rather than in some way to music itself, arises from the nature of information theory. Mathematical formulations of information content are not descriptive of conveyed meaning; rather, they describe distributional aspects of the symbolic characters used in the transmission of that meaning. The input data to the calculations of information theory are encodings of a communication that can be described statistically. For written language (one of the areas for which information theory was originally developed and where its efficacy has been effectively
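The entropy calculation described above can be sketched briefly. The following is a minimal illustration, not taken from the article: it treats each pitch name in a melody as a symbol of the "alphabet," estimates symbol probabilities from relative frequencies, and computes the Shannon entropy in bits per symbol. The example melody is hypothetical.

```python
from collections import Counter
from math import log2

def pitch_entropy(pitches):
    """Shannon entropy (bits per symbol) of a sequence of pitch symbols,
    with probabilities estimated as relative frequencies."""
    counts = Counter(pitches)
    n = len(pitches)
    # H = -sum_i p_i * log2(p_i), summed over the distinct symbols
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical melody encoded as pitch names
melody = ["C", "D", "E", "C", "G", "E", "C", "D"]
print(round(pitch_entropy(melody), 3))  # prints 1.906
```

A melody drawn uniformly from k distinct pitches would yield the maximum entropy log2(k); repetitive melodies score lower, which is the distributional (not semantic) sense of "information" the passage emphasizes.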
[1] D. Middleton, An Introduction to Statistical Communication Theory, 1960.
[2] C. F. Malmberg et al., The perception of consonance and dissonance, 1918.
[3] W. Hutchinson et al., The significance of the acoustic component of consonance in Western triads, 1979.
[4] R. Plomp et al., Tonal consonance and critical bandwidth, The Journal of the Acoustical Society of America, 1965.
[5] C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal, 1948.
[6] G. Reinsel et al., Introduction to Mathematical Statistics (4th ed.), 1980.