Towards intentional neural systems: experiments with MAGNUS
The term "intentionality" arises in connection with natural-language understanding by a computer. The problem is not one of speech recognition: it remains even if the words of the language are perfectly encoded by a speech recognizer, or simply typed at a keyboard. It is believed that the ability to visualize events when hearing sentences that describe them is a clue to how artificial neural networks need to be structured and trained. The notion of intentionality that gives this paper its title is that of Searle (1992), who argues that classical logical models fail to capture "understanding" because they have no intentional relationship with the objects they represent. Searle illustrated the point with the now well-known Chinese Room example, in which, he argued, the symbols of a language can be manipulated to give answers to questions about a story composed of those symbols, without any understanding of the story itself. In this paper we show that, through a process of "iconic" training, a neural state machine can develop an "intentional" representation. An example of this is shown as implemented in MAGNUS (Multiple Automata of General Neural UnitS) software.
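The abstract does not spell out what "iconic" training of a neural state machine involves, so the following Python sketch is only a rough illustration under our own assumptions; it is not the authors' MAGNUS software (which is built from weightless, RAM-like General Neural Units), and every name, the toy 5x5 "icons", and the lookup-table state transition are illustrative inventions. The key idea it tries to capture is that training forces the machine's internal state to *be* a picture-like pattern of the named object, rather than an arbitrary symbolic code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "iconic" patterns: a binary picture the state should settle into
# when the corresponding word is heard.  (Illustrative assumption only.)
ICONS = {
    "cat": rng.integers(0, 2, size=25),
    "ball": rng.integers(0, 2, size=25),
}

class IconicStateMachine:
    """Minimal state machine whose next state is looked up from (word, state) pairs."""

    def __init__(self, state_size=25):
        self.state = np.zeros(state_size, dtype=int)
        self.memory = {}  # (word, current-state bytes) -> next state

    def _key(self, word):
        return (word, self.state.tobytes())

    def train(self, word, icon):
        # Iconic training: the next state is forced to equal the icon, so the
        # internal state depicts the named object.
        self.memory[self._key(word)] = np.array(icon, dtype=int)
        self.state = np.array(icon, dtype=int)

    def hear(self, word):
        # Crude generalisation: if this exact (word, state) pair was never
        # trained, fall back to the most recently stored icon for the word.
        next_state = self.memory.get(self._key(word))
        if next_state is None:
            candidates = [v for (w, _), v in self.memory.items() if w == word]
            next_state = candidates[-1] if candidates else self.state
        self.state = next_state
        return self.state

# Train on two words, then "hear" one of them from a blank starting state.
m = IconicStateMachine()
for w, icon in ICONS.items():
    m.train(w, icon)

m.state = np.zeros(25, dtype=int)
print((m.hear("cat") == ICONS["cat"]).all())  # True: the state now depicts the cat icon
```

The deliberate design choice in this sketch is that the trained state pattern is the sensory icon itself; on this reading, "hearing" a word drives the machine into a state that visualizes the object, which is the intentional relationship the abstract attributes to iconically trained neural state machines.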
[1] Igor Aleksander et al., Neurons and Symbols: The Stuff That Mind Is Made Of, 1993.
[2] John R. Searle, The Rediscovery of the Mind, 1992 (reviewed in Artif. Intell., 1995).
[3] Terry Winograd, Understanding Natural Language, 1974.