Statistics and Machine Learning Experiments in Poetry
This paper presents a quantitative approach to poetry based on several statistical measures (entropy, informational energy, N-grams, etc.) applied to a few characteristic English writings. We found that the entropy of written English changes over time and depends on both the language used and the author. To compare two similar texts, we introduce a statistical method for assessing the information entropy between them, as well as a method for computing the average information that a group of letters conveys about the next letter in the text. We derive a formula for computing the Shannon language entropy and introduce the concept of the N-gram informational energy of a poem. We also constructed a neural network that generates Byron-style poetry and analyzed its informational proximity to genuine Byron poetry.
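The measures named in the abstract can be estimated directly from character counts. The following is a minimal sketch, not the authors' implementation: it computes unigram Shannon entropy, Onicescu-style informational energy, and the N-gram conditional entropy of the next letter given its context. The sample text, the order N = 3, and all function names are illustrative assumptions.

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Unigram Shannon entropy H = -sum p_i * log2(p_i), in bits per character."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def informational_energy(text: str) -> float:
    """Informational energy E = sum p_i^2 (Onicescu); higher means a less diverse distribution."""
    counts = Counter(text)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

def conditional_entropy(text: str, n: int) -> float:
    """Estimate H(next letter | preceding n-1 letters) from n-gram counts.

    Uses F_n = H(n-gram) - H((n-1)-gram context); the average information the
    context conveys about the next letter is then H(unigram) - F_n.
    """
    positions = range(len(text) - n + 1)
    ngrams = Counter(text[i:i + n] for i in positions)
    contexts = Counter(text[i:i + n - 1] for i in positions)
    total = sum(ngrams.values())
    h_joint = -sum((c / total) * log2(c / total) for c in ngrams.values())
    h_context = -sum((c / total) * log2(c / total) for c in contexts.values())
    return h_joint - h_context

if __name__ == "__main__":
    # Illustrative sample: a genuine Byron line, lowercased and unpunctuated.
    sample = ("she walks in beauty like the night "
              "of cloudless climes and starry skies")
    h1 = shannon_entropy(sample)
    f3 = conditional_entropy(sample, 3)
    print(f"H1 (bits/char):             {h1:.3f}")
    print(f"Informational energy:       {informational_energy(sample):.4f}")
    print(f"F3 = H(char | 2 chars):     {f3:.3f}")
    print(f"Info conveyed by context:   {h1 - f3:.3f}")
```

On a text this short the N-gram estimates are badly undersampled; the paper's corpus-level comparisons would apply the same counting to full poems or collections.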
[1] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 1997.
[2] Claude E. Shannon. Prediction and Entropy of Printed English. Bell System Technical Journal, 1951.
[3] Claude E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 1948.
[4] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 1986.