Lexicality and pronunciation in a simulated neural net
Self-supervised compressive neural nets can perform nonlinear multilevel latent structure analysis. They therefore have promise for cognitive theory. We study their use in the Seidenberg & McClelland (1989) model of reading. Analysis shows that self-supervised compression in their model can make only a limited contribution to lexical decision, and simulation shows that it interferes with the associative mapping into phonology. Self-supervised compression is therefore put to no good use in their model. This does not weaken the arguments for self-supervised compression, however, and we suggest possible beneficial uses that merit further study.
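The abstract's notion of "self-supervised compression" corresponds to an autoencoding hidden layer trained to reconstruct its own input, so that reconstruction error can act as a familiarity (lexicality) signal. The sketch below is not the authors' implementation; it is a minimal illustration under assumed layer sizes, learning rate, and toy binary "orthographic" patterns, using plain NumPy.

```python
# Minimal sketch of self-supervised compression (an autoencoder):
# the hidden layer is trained to reconstruct its own input, so the
# hidden code becomes a compressed latent description and the
# reconstruction error can serve as a rough lexicality signal.
# All sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 400, 100           # assumed orthographic and hidden layer sizes
W_enc = rng.normal(0.0, 0.1, (n_hidden, n_input))
W_dec = rng.normal(0.0, 0.1, (n_input, n_hidden))
lr = 0.05


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def reconstruct(x):
    """Encode the input into the compressed code and decode it back."""
    h = sigmoid(W_enc @ x)             # compressed latent code
    x_hat = sigmoid(W_dec @ h)         # reconstruction of the input
    return h, x_hat


def train_step(x):
    """One gradient step on the self-supervised reconstruction error."""
    global W_enc, W_dec
    h, x_hat = reconstruct(x)
    err = x_hat - x                    # the input is its own training target
    d_out = err * x_hat * (1.0 - x_hat)
    d_hid = (W_dec.T @ d_out) * h * (1.0 - h)
    W_dec -= lr * np.outer(d_out, h)
    W_enc -= lr * np.outer(d_hid, x)
    return float(np.mean(err ** 2))


# Toy sparse binary patterns standing in for orthographic word codes.
patterns = (rng.random((50, n_input)) < 0.1).astype(float)

for epoch in range(200):
    for x in patterns:
        train_step(x)

# After training, familiar (trained) patterns are reconstructed better than
# novel ones -- the limited kind of contribution to lexical decision that
# the paper analyses.
trained_err = np.mean([np.mean((reconstruct(x)[1] - x) ** 2) for x in patterns])
novel = (rng.random((50, n_input)) < 0.1).astype(float)
novel_err = np.mean([np.mean((reconstruct(x)[1] - x) ** 2) for x in novel])
print(f"trained-pattern error: {trained_err:.4f}, novel-pattern error: {novel_err:.4f}")
```

In this toy setting the gap between trained- and novel-pattern reconstruction error is the "lexicality" signal; the paper argues that in the Seidenberg & McClelland architecture this signal is of limited use and that the compression objective interferes with the associative mapping from orthography to phonology.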